{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Copyright (c) Microsoft Corporation. All rights reserved.\n", "\n", "Licensed under the MIT License." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-on-local/train-on-local.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 02. Train locally\n", "_**Train a model locally: Directly on your machine and within a Docker container**_\n", "\n", "---\n", "\n", "\n", "## Table of contents\n", "1. [Introduction](#intro)\n", "1. [Pre-requisites](#pre-reqs)\n", "1. [Initialize Workspace](#init)\n", "1. [Create An Experiment](#exp)\n", "1. [View training and auxiliary scripts](#view)\n", "1. [Configure & Run](#config-run)\n", " 1. User-managed environment\n", " 1. Set the environment up\n", " 1. Submit the script to run in the user-managed environment\n", " 1. Get run history details\n", " 1. System-managed environment\n", " 1. Set the environment up\n", " 1. Submit the script to run in the system-managed environment\n", " 1. Get run history details\n", " 1. Docker-based execution\n", " 1. Set the environment up\n", " 1. Submit the script to run in the system-managed environment\n", " 1. Get run history details\n", " 1. Use a custom Docker image\n", "1. [Query run metrics](#query)\n", "\n", "---\n", "\n", "## 1. Introduction \n", "\n", "In this notebook, we will learn how to:\n", "\n", "* Connect to our AML workspace\n", "* Create or load a workspace\n", "* Configure & execute a local run in:\n", " - a user-managed Python environment\n", " - a system-managed Python environment\n", " - a Docker environment\n", "* Query run metrics to find the best model trained in the run\n", "* Register that model for operationalization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Pre-requisites \n", "In this notebook, we assume that you have set your Azure Machine Learning workspace. If you have not, make sure you go through the [configuration notebook](../../../configuration.ipynb) first. In the end, you should have configuration file that contains the subscription ID, resource group and name of your workspace." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check core SDK version number\n", "import azureml.core\n", "\n", "print(\"SDK version:\", azureml.core.VERSION)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Initialize Workspace \n", "\n", "Initialize your workspace object from configuration file" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azureml.core.workspace import Workspace\n", "\n", "ws = Workspace.from_config()\n", "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Create An Experiment \n", "An experiment is a logical container in an Azure ML Workspace. It contains a series of trials called `Runs`. As such, it hosts run records such as run metrics, logs, and other output artifacts from your experiments." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azureml.core import Experiment\n", "experiment_name = 'train-on-local'\n", "exp = Experiment(workspace=ws, name=experiment_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. 
View training and auxiliary scripts <a id='view'></a>\n", "\n", "For convenience, we already created the training script (`train.py`) and a supporting library (`mylib.py`) for you. Take a few minutes to examine both files." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with open('./train.py', 'r') as f:\n", "    print(f.read())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with open('./mylib.py', 'r') as f:\n", "    print(f.read())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. Configure & Run <a id='config-run'></a>\n", "### 6.A User-managed environment\n", "\n", "#### 6.A.a Set the environment up\n", "When using a user-managed environment, you are responsible for ensuring that all the necessary packages are available in the Python environment you choose to run the script in." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azureml.core import Environment\n", "\n", "# Edit a run configuration property on the fly\n", "user_managed_env = Environment(\"user-managed-env\")\n", "\n", "user_managed_env.python.user_managed_dependencies = True\n", "\n", "# You can choose a specific Python environment by pointing to a Python path\n", "# user_managed_env.python.interpreter_path = '/home/johndoe/miniconda3/envs/myenv/bin/python'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.A.b Submit the script to run in the user-managed environment\n", "However you manage your environment, you need to use the `ScriptRunConfig` class. It allows you to further configure your run by pointing to the `train.py` script and to the working directory, which also contains the `mylib.py` file. These inputs define what gets executed in the run. Once the run is configured, you submit it to your experiment." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azureml.core import ScriptRunConfig\n", "\n", "src = ScriptRunConfig(source_directory='./', script='train.py')\n", "src.run_config.environment = user_managed_env" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run = exp.submit(src)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.A.c Get run history details\n", "\n", "Although all calculations ran on your local machine (see below), submitting them as a run also captured their results in your run history and experiment. You can see them on the Azure portal, through the link displayed as output of the following cell.\n", "\n", "**Note**: The recording of the computation results into your run was made possible by the `run.log()` calls in the `train.py` file." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Block until the run finishes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run.wait_for_completion(show_output=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** All these calculations were run on your local machine, in the conda environment you defined above. 
You can find the results in:\n", "- `~/.azureml/envs/azureml_xxxx` for the conda environment you just created\n", "- `~/AppData/Local/Temp/azureml_runs/train-on-local_xxxx` for the machine learning models you trained (this path may differ depending on the platform you use). This folder also contains:\n", "    - Logs (under azureml_logs/)\n", "    - Output pickled files (under outputs/)\n", "    - The configuration files (credentials, local and Docker image setups)\n", "    - The train.py and mylib.py scripts\n", "    - The current notebook\n", "\n", "Take a few minutes to examine the output of the cell above. It shows the content of some of the log files, and extra information on the conda environment used." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 6.B System-managed environment\n", "#### 6.B.a Set the environment up\n", "Now, instead of managing the setup of the environment yourself, you can ask the system to build a new conda environment for you. The environment is built once, and will be reused in subsequent executions as long as the conda dependencies remain unchanged." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azureml.core.conda_dependencies import CondaDependencies\n", "\n", "system_managed_env = Environment(\"system-managed-env\")\n", "\n", "system_managed_env.python.user_managed_dependencies = False\n", "\n", "# Specify conda dependencies with scikit-learn\n", "cd = CondaDependencies.create(conda_packages=['scikit-learn'])\n", "system_managed_env.python.conda_dependencies = cd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.B.b Submit the script to run in the system-managed environment\n", "A new conda environment is built based on the conda dependencies object. If you are running this for the first time, this might take up to 5 minutes.\n", "\n", "The commands used to execute the run are then the same as the ones you used above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "src.run_config.environment = system_managed_env\n", "run = exp.submit(src)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.B.c Get run history details" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run.wait_for_completion(show_output=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 6.C Docker-based execution\n", "In this section, you will train the same models, but inside a Docker container on your local machine. For this, you need to have the Docker engine installed locally. If you don't have it yet, please follow the instructions below.\n", "\n", "#### How to install Docker\n", "\n", "- [Linux](https://docs.docker.com/install/linux/docker-ce/ubuntu/)\n", "- [macOS](https://docs.docker.com/docker-for-mac/install/)\n", "- [Windows](https://docs.docker.com/docker-for-windows/install/)\n", "\n", "In case of issues, troubleshooting documentation can be found [here](https://docs.docker.com/docker-for-windows/troubleshoot/#running-docker-for-windows-in-nested-virtualization-scenarios). 
Additionally, if virtualization is not enabled on your machine, you can follow the steps below:\n", "- Go to Task Manager > Performance\n", "- Check that Virtualization is enabled\n", "- If it is not, go to `Start > Settings > Update and security > Recovery > Advanced Startup - Restart now > Troubleshoot > Advanced options > UEFI firmware settings - restart`\n", "- In the BIOS, go to `Advanced > System options`, enable \"Virtualization Technology (VTx)\", then `Save > Exit > Save all changes` -- this will restart the machine\n", "\n", "**Notes**:\n", "- If your kernel is already running in a Docker container, such as **Azure Notebooks**, this mode will **NOT** work.\n", "- If you use a GPU base image, it needs to be used on Microsoft Azure services such as ACI, AML Compute, Azure VMs, or AKS.\n", "\n", "You can also ask the system to pull down a Docker image and execute your scripts in it.\n", "\n", "#### 6.C.a Set the environment up\n", "\n", "In the cell below, you will configure your run to execute in a Docker container. It will:\n", "- run on a CPU\n", "- contain a conda environment in which the scikit-learn library will be installed.\n", "\n", "As before, you will finish configuring your run by pointing to the `train.py` and `mylib.py` files." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "docker_env = Environment(\"docker-env\")\n", "\n", "docker_env.python.user_managed_dependencies = False\n", "docker_env.docker.enabled = True\n", "\n", "# Use the default CPU-based Docker image from Azure ML\n", "print(docker_env.docker.base_image)\n", "\n", "# Specify conda dependencies with scikit-learn\n", "docker_env.python.conda_dependencies = cd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.C.b Submit the script to run in the Docker-based environment\n", "\n", "The run is now configured and ready to be executed in a Docker container. If you are running this for the first time, the Docker image will get built, as well as the conda environment inside it. This will take several minutes. Once all this is generated, however, this conda environment will be reused as long as you don't change the conda dependencies." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import subprocess\n", "\n", "src.run_config.environment = docker_env\n", "\n", "# Check if Docker is installed and Linux containers are enabled\n", "if subprocess.run(\"docker -v\", shell=True).returncode == 0:\n", "    out = subprocess.check_output(\"docker system info\", shell=True).decode('ascii')\n", "    if \"OSType: linux\" not in out:\n", "        print(\"Switch Docker engine to use Linux containers.\")\n", "    else:\n", "        run = exp.submit(src)\n", "else:\n", "    print(\"Docker engine is not installed.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Potential issue on Windows and how to solve it\n", "\n", "If you are using a Windows machine, the creation of the Docker image may fail, and you may see the following error message:\n", "`docker: Error response from daemon: Drive has not been shared. Failed to launch docker container. Check that docker is running and that C:\\ on Windows and /tmp elsewhere is shared.`\n", "\n", "This is because the process above tries to create a Linux-based (i.e., non-Windows) Docker image. 
To fix this, you can:\n", "- Open the Docker user interface\n", "- Navigate to Settings > Shared drives\n", "- Select C (or both C and D, if you have one)\n", "- Apply\n", "\n", "When this is done, you can re-run the command above.\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.C.c Get run history details" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get run history details\n", "run" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run.wait_for_completion(show_output=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The results obtained here should be the same as those obtained before. However, take a look at the \"Execution summary\" section in the output of the cell above and look for \"docker\". There, you should see the \"enabled\" field set to True. Compare this to the two prior runs (\"enabled\" was then set to False)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.C.d Use a custom Docker image\n", "\n", "You can also specify a custom Docker image, if you don't want to use the default image provided by Azure ML.\n", "\n", "You can either pull a public image directly from Docker Hub:\n", "```python\n", "# Use an image available on Docker Hub without authentication\n", "docker_env.docker.base_image = \"continuumio/miniconda3\"\n", "```\n", "\n", "Or one of the images you may already have created:\n", "```python\n", "# Or, use an image available in your private Azure Container Registry\n", "docker_env.docker.base_image = \"mycustomimage:1.0\"\n", "docker_env.docker.base_image_registry.address = \"myregistry.azurecr.io\"\n", "docker_env.docker.base_image_registry.username = \"username\"\n", "docker_env.docker.base_image_registry.password = \"password\"\n", "```\n", "\n", "##### Where to find my Docker image name and registry credentials\n", "If you do not know what the name of your Docker image or container registry is, or if you don't know how to access the username and password needed above, proceed as follows:\n", "- Docker image name:\n", "    - In the portal, under your resource group, click on your current workspace\n", "    - Click on Experiments\n", "    - Click on Images\n", "    - Click on the image of your choice\n", "    - Copy the \"ID\" string\n", "    - In this notebook, replace \"mycustomimage:1.0\" with that ID string\n", "- Username and password:\n", "    - In the portal, under your resource group, click on the container registry associated with your workspace\n", "    - If you have several and don't know which one you need, click on your workspace, go to Overview and click on the \"Registry\" name on the upper right of the screen\n", "    - There, go to \"Access keys\"\n", "    - Copy the username and one of the passwords\n", "    - In this notebook, replace \"username\" and \"password\" with these values\n", "\n", "In any case, you will need to add the lines above to the environment definition in section 6.C.a, so that your custom image is used in place of the default base image.\n", "\n", "When you are using your custom Docker image, you might already have your Python environment properly set up. 
In that case, you can skip specifying conda dependencies, and just use the `user_managed_dependencies` option instead:\n", "```python\n", "docker_env.python.user_managed_dependencies = True\n", "# Path to the Python environment in the custom Docker image\n", "docker_env.python.interpreter_path = '/opt/conda/bin/python'\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 7. Query run metrics <a id='query'></a>\n", "\n", "Once your run has completed, you can extract the metrics you captured by using the `get_metrics` method. As shown in the `train.py` file, these metrics are \"alpha\" and \"mse\"." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "query history", "get metrics" ] }, "outputs": [], "source": [ "# Get all metrics logged in the run\n", "metrics = run.get_metrics()\n", "metrics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's find the model that has the lowest MSE value logged." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "best_alpha = metrics['alpha'][np.argmin(metrics['mse'])]\n", "\n", "print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(\n", "    min(metrics['mse']),\n", "    best_alpha\n", "))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's compare it to the others." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "import matplotlib\n", "import matplotlib.pyplot as plt\n", "\n", "plt.plot(metrics['alpha'], metrics['mse'], marker='o')\n", "plt.ylabel(\"MSE\")\n", "plt.xlabel(\"Alpha\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also list all the files that are associated with this run record." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run.get_file_names()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the results obtained above, `ridge_0.40.pkl` is the best-performing model. You can now register that particular model with the workspace. Once you have done so, go back to the portal and click on \"Models\". You should see it there." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Supply a model name, and the full path to the serialized model file.\n", "model = run.register_model(model_name='best_ridge_model', model_path='./outputs/ridge_0.40.pkl')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Registered model:\\n --> Name: {}\\n --> Version: {}\\n --> URL: {}\".format(model.name, model.version, model.url))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can now deploy your model by following [this example](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb)."
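] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As an optional sanity check before deployment, you can retrieve the registered model from the workspace in a later session. The sketch below is an illustration rather than part of the training flow; the model name `best_ridge_model` comes from the registration step above, while the `./restored_model` download folder is an arbitrary choice:\n", "```python\n", "from azureml.core.model import Model\n", "\n", "# Look up the latest registered version of the model by name\n", "restored_model = Model(workspace=ws, name='best_ridge_model')\n", "print(restored_model.name, restored_model.version)\n", "\n", "# Download the serialized model file locally (folder name is arbitrary)\n", "restored_model.download(target_dir='./restored_model', exist_ok=True)\n", "```"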
] } ], "metadata": { "authors": [ { "name": "roastala" } ], "kernelspec": { "display_name": "Python 3.6", "language": "python", "name": "python36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" }, "friendly_name": "Train on local compute", "exclude_from_index": false, "index_order": 1, "category": "training", "task": "Train a model locally", "datasets": [ "Diabetes" ], "compute": [ "Local" ], "deployment": [ "None" ], "framework": [ "None" ], "tags": [ "None" ] }, "nbformat": 4, "nbformat_minor": 2 }