{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Manage runs\n",
"\n",
"## Table of contents\n",
"\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Start, monitor and complete a run](#Start,-monitor-and-complete-a-run)\n",
"1. [Add properties and tags](#Add-properties-and-tags)\n",
"1. [Query properties and tags](#Query-properties-and-tags)\n",
"1. [Start and query child runs](#Start-and-query-child-runs)\n",
"1. [Cancel or fail runs](#Cancel-or-fail-runs)\n",
"1. [Reproduce a run](#Reproduce-a-run)\n",
"1. [Next steps](#Next-steps)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"When you're building enterprise-grade machine learning models, it is important to track, organize, monitor and reproduce your training runs. For example, you might want to trace the lineage behind a model deployed to production, and re-run the training experiment to troubleshoot issues. \n",
"\n",
"This notebooks shows examples how to use Azure Machine Learning services to manage your training runs."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. Also, if you're new to Azure ML, we recommend that you go through [the tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-train-models-with-aml) first to learn the basic concepts.\n",
"\n",
"Let's first import required packages, check Azure ML SDK version, connect to your workspace and create an Experiment to hold the runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace, Experiment, Run\n",
"from azureml.core import ScriptRunConfig\n",
"\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"exp = Experiment(workspace=ws, name=\"explore-runs\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start, monitor and complete a run\n",
"\n",
"A run is an unit of execution, typically to train a model, but for other purposes as well, such as loading or transforming data. Runs are tracked by Azure ML service, and can be instrumented with metrics and artifact logging.\n",
"\n",
"A simplest way to start a run in your interactive Python session is to call *Experiment.start_logging* method. You can then log metrics from within the run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"notebook_run = exp.start_logging()\n",
"\n",
"notebook_run.log(name=\"message\", value=\"Hello from run!\")\n",
"\n",
"print(notebook_run.get_status())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use *get_status method* to get the status of the run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(notebook_run.get_status())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Also, you can simply enter the run to get a link to Azure Portal details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"notebook_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Method *get_details* gives you more details on the run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"notebook_run.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use *complete* method to end the run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"notebook_run.complete()\n",
"print(notebook_run.get_status())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also use Python's *with...as* pattern. The run will automatically complete when moving out of scope. This way you don't need to manually complete the run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with exp.start_logging() as notebook_run:\n",
" notebook_run.log(name=\"message\", value=\"Hello from run!\")\n",
" print(\"Is it still running?\",notebook_run.get_status())\n",
" \n",
"print(\"Has it completed?\",notebook_run.get_status())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let's look at submitting a run as a separate Python process. To keep the example simple, we submit the run on local computer. Other targets could include remote VMs and Machine Learning Compute clusters in your Azure ML Workspace.\n",
"\n",
"We use *hello.py* script as an example. To perform logging, we need to get a reference to the Run instance from within the scope of the script. We do this using *Run.get_context* method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!more hello.py"
]
},
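{
"cell_type": "markdown",
"metadata": {},
"source": [
"In case you don't have the file open, here is a minimal sketch of what such a script looks like (the exact contents of *hello.py* may differ):\n",
"\n",
"```python\n",
"from azureml.core import Run\n",
"\n",
"# Get a reference to the run this script executes in. Outside a\n",
"# submitted run this returns an offline run object, so the script\n",
"# also works when executed directly.\n",
"run = Run.get_context()\n",
"\n",
"run.log(name=\"message\", value=\"Hello from script!\")\n",
"```"
]
},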
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Submitted runs take a snapshot of the *source_directory* to use when executing. You can control which files are available to the run by using an *.amlignore* file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile .amlignore\n",
"# Exclude the outputs directory automatically created by our earlier runs.\n",
"/outputs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's submit the run on a local computer. A standard pattern in Azure ML SDK is to create run configuration, and then use *Experiment.submit* method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_config = ScriptRunConfig(source_directory='.', script='hello.py')\n",
"\n",
"local_script_run = exp.submit(run_config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can view the status of the run as before"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(local_script_run.get_status())\n",
"local_script_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Submitted runs have additional log files you can inspect using *get_details_with_logs*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_script_run.get_details_with_logs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use *wait_for_completion* method to block the local execution until remote run is complete."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_script_run.wait_for_completion(show_output=True)\n",
"print(local_script_run.get_status())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add properties and tags\n",
"\n",
"Properties and tags help you organize your runs. You can use them to describe, for example, who authored the run, what the results were, and what machine learning approach was used. And as you'll later learn, properties and tags can be used to query the history of your runs to find the important ones.\n",
"\n",
"For example, let's add \"author\" property to the run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_script_run.add_properties({\"author\":\"azureml-user\"})\n",
"print(local_script_run.get_properties())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Properties are immutable. Once you assign a value it cannot be changed, making them useful as a permanent record for auditing purposes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" local_script_run.add_properties({\"author\":\"different-user\"})\n",
"except Exception as e:\n",
" print(e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tags on the other hand can be changed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_script_run.tag(\"quality\", \"great run\")\n",
"print(local_script_run.get_tags())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_script_run.tag(\"quality\", \"fantastic run\")\n",
"print(local_script_run.get_tags())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also add a simple string tag. It appears in the tag dictionary with value of None"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_script_run.tag(\"worth another look\")\n",
"print(local_script_run.get_tags())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Query properties and tags\n",
"\n",
"You can query runs within an experiment that match specific properties and tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"list(exp.get_runs(properties={\"author\":\"azureml-user\"},tags={\"quality\":\"fantastic run\"}))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"list(exp.get_runs(properties={\"author\":\"azureml-user\"},tags=\"worth another look\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start and query child runs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use child runs to group together related runs, for example different hyperparameter tuning iterations.\n",
"\n",
"Let's use *hello_with_children* script to create a batch of 5 child runs from within a submitted run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!more hello_with_children.py"
]
},
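{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of what such a script can look like, assuming it uses the *Run.create_children* batch API to create all 5 child runs with a single network call (the exact contents of *hello_with_children.py* may differ):\n",
"\n",
"```python\n",
"from azureml.core import Run\n",
"\n",
"run = Run.get_context()\n",
"\n",
"# Create a batch of 5 child runs in one call.\n",
"child_runs = run.create_children(count=5)\n",
"\n",
"for c, child in enumerate(child_runs):\n",
"    child.log(name=\"Hello from child run\", value=c)\n",
"    child.complete()\n",
"```"
]
},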
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_config = ScriptRunConfig(source_directory='.', script='hello_with_children.py')\n",
"\n",
"local_script_run = exp.submit(run_config)\n",
"local_script_run.wait_for_completion(show_output=True)\n",
"print(local_script_run.get_status())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can start child runs one by one. Note that this is less efficient than submitting a batch of runs, because each creation results in a network call.\n",
"\n",
"Child runs too complete automatically as they move out of scope."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with exp.start_logging() as parent_run:\n",
" for c,count in enumerate(range(5)):\n",
" with parent_run.child_run() as child:\n",
" child.log(name=\"Hello from child run\", value=c)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To query the child runs belonging to specific parent, use *get_children* method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"list(parent_run.get_children())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cancel or fail runs\n",
"\n",
"Sometimes, you realize that the run is not performing as intended, and you want to cancel it instead of waiting for it to complete.\n",
"\n",
"As an example, let's create a Python script with a delay in the middle."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!more hello_with_delay.py"
]
},
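{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of such a script; the *time.sleep* in the middle keeps the run alive long enough to cancel it (the exact contents of *hello_with_delay.py* may differ):\n",
"\n",
"```python\n",
"import time\n",
"\n",
"from azureml.core import Run\n",
"\n",
"run = Run.get_context()\n",
"\n",
"print(\"Starting...\")\n",
"time.sleep(30)  # delay long enough to cancel the run mid-flight\n",
"print(\"Finished.\")\n",
"```"
]
},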
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use *cancel* method to cancel a run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_config = ScriptRunConfig(source_directory='.', script='hello_with_delay.py')\n",
"\n",
"local_script_run = exp.submit(run_config)\n",
"print(\"Did the run start?\",local_script_run.get_status())\n",
"local_script_run.cancel()\n",
"print(\"Did the run cancel?\",local_script_run.get_status())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also mark an unsuccessful run as failed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_script_run = exp.submit(run_config)\n",
"local_script_run.fail()\n",
"print(local_script_run.get_status())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reproduce a run\n",
"\n",
"When updating or troubleshooting on a model deployed to production, you sometimes need to revisit the original training run that produced the model. To help you with this, Azure ML service by default creates snapshots of your scripts a the time of run submission:\n",
"\n",
"You can use *restore_snapshot* to obtain a zip package of the latest snapshot of the script folder. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_script_run.restore_snapshot(path=\"snapshots\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then extract the zip package, examine the code, and submit your run again."
]
},
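{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, using the standard library's *zipfile* module. This is a minimal sketch: it assumes *restore_snapshot* returns the path of the downloaded zip package, and \"restored_snapshot\" is just an illustrative target directory name."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import zipfile\n",
"\n",
"# restore_snapshot returns the path of the zip package it downloaded\n",
"snapshot_path = local_script_run.restore_snapshot(path=\"snapshots\")\n",
"\n",
"with zipfile.ZipFile(snapshot_path) as archive:\n",
"    archive.extractall(\"restored_snapshot\")"
]
},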
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
" * To learn more about logging APIs, see [logging API notebook](./logging-api/logging-api.ipynb)\n",
" * To learn more about remote runs, see [train on AML compute notebook](./train-on-amlcompute/train-on-amlcompute.ipynb)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"category": "training",
"compute": [
"Local"
],
"datasets": [
"None"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"None"
],
"friendly_name": "Managing your training runs",
"index_order": 2,
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"tags": [
"None"
],
"task": "Monitor and complete runs"
},
"nbformat": 4,
"nbformat_minor": 2
}