{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial #1: Train an image classification model with Azure Machine Learning\n",
"\n",
"In this tutorial, you train a machine learning model both locally and on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning service (preview) in a Python Jupyter notebook. You can then use the notebook as a template to train your own machine learning model with your own data. This tutorial is **part one of a two-part tutorial series**. \n",
"\n",
"This tutorial trains a simple logistic regression using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](http://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing a number from 0 to 9. The goal is to create a multi-class classifier to identify the digit a given image represents. \n",
"\n",
"Learn how to:\n",
"\n",
"> * Set up your development environment\n",
"> * Access and examine the data\n",
"> * Train a simple logistic regression model locally using the popular scikit-learn machine learning library \n",
"> * Train multiple models on a remote cluster\n",
"> * Review training results, find and register the best model\n",
"\n",
"You'll learn how to select a model and deploy it in [part two of this tutorial](deploy-models.ipynb) later. \n",
"\n",
"## Prerequisites\n",
"\n",
"Use [these instructions](https://aka.ms/aml-how-to-configure-environment) to: \n",
"* Create a workspace and its configuration file (**config.json**) \n",
"* Save your **config.json** to the same folder as this notebook"
]
},
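{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the **config.json** that `Workspace.from_config()` reads typically has the following shape (placeholder values shown; the configuration notebook linked above generates the real file for you):\n",
"\n",
"```json\n",
"{\n",
"    \"subscription_id\": \"<subscription-id>\",\n",
"    \"resource_group\": \"<resource-group>\",\n",
"    \"workspace_name\": \"<workspace-name>\"\n",
"}\n",
"```"
]
},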
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up your development environment\n",
"\n",
"All the setup for your development work can be accomplished in a Python notebook. Setup includes:\n",
"\n",
"* Importing Python packages\n",
"* Connecting to a workspace to enable communication between your local computer and remote resources\n",
"* Creating an experiment to track all your runs\n",
"* Creating a remote compute target to use for training\n",
"\n",
"### Import packages\n",
"\n",
"Import Python packages you need in this session. Also display the Azure Machine Learning SDK version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"check version"
]
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import numpy as np\n",
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import azureml\n",
"from azureml.core import Workspace, Run\n",
"\n",
"# check core SDK version number\n",
"print(\"Azure ML SDK Version: \", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to workspace\n",
"\n",
"Create a workspace object from the existing workspace. `Workspace.from_config()` reads the file **config.json** and loads the details into an object named `ws`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"load workspace"
]
},
"outputs": [],
"source": [
"# load workspace configuration from the config.json file in the current folder.\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.location, ws.resource_group, ws.location, sep = '\\t')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create experiment\n",
"\n",
"Create an experiment to track the runs in your workspace. A workspace can have muliple experiments. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create experiment"
]
},
"outputs": [],
"source": [
"experiment_name = 'sklearn-mnist'\n",
"\n",
"from azureml.core import Experiment\n",
"exp = Experiment(workspace=ws, name=experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
"\n",
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create mlc",
"amlcompute"
]
},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"import os\n",
"\n",
"# choose a name for your cluster\n",
"compute_name = os.environ.get(\"AML_COMPUTE_CLUSTER_NAME\", \"cpucluster\")\n",
"compute_min_nodes = os.environ.get(\"AML_COMPUTE_CLUSTER_MIN_NODES\", 0)\n",
"compute_max_nodes = os.environ.get(\"AML_COMPUTE_CLUSTER_MAX_NODES\", 4)\n",
"\n",
"# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6\n",
"vm_size = os.environ.get(\"AML_COMPUTE_CLUSTER_SKU\", \"STANDARD_D2_V2\")\n",
"\n",
"\n",
"if compute_name in ws.compute_targets:\n",
" compute_target = ws.compute_targets[compute_name]\n",
" if compute_target and type(compute_target) is AmlCompute:\n",
" print('found compute target. just use it. ' + compute_name)\n",
"else:\n",
" print('creating a new compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = vm_size,\n",
" min_nodes = compute_min_nodes, \n",
" max_nodes = compute_max_nodes)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
" \n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
" # For a more detailed view of current AmlCompute status, use the 'status' property \n",
" print(compute_target.status.serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You now have the necessary packages and compute resources to train a model in the cloud. \n",
"\n",
"## Explore data\n",
"\n",
"Before you train a model, you need to understand the data that you are using to train it. You also need to copy the data into the cloud so it can be accessed by your cloud training environment. In this section you learn how to:\n",
"\n",
"* Download the MNIST dataset\n",
"* Display some sample images\n",
"* Upload data to the cloud\n",
"\n",
"### Download the MNIST dataset\n",
"\n",
"Download the MNIST dataset and save the files into a `data` directory locally. Images and labels for both training and testing are downloaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import urllib.request\n",
"\n",
"os.makedirs('./data', exist_ok = True)\n",
"\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename='./data/train-images.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename='./data/train-labels.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename='./data/test-images.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename='./data/test-labels.gz')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Display some sample images\n",
"\n",
"Load the compressed files into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them. Note this step requires a `load_data` function that's included in an `util.py` file. This file is included in the sample folder. Please make sure it is placed in the same folder as this notebook. The `load_data` function simply parses the compresse files into numpy arrays."
]
},
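{
"cell_type": "markdown",
"metadata": {},
"source": [
"In case you're curious what `load_data` does under the hood, here is a rough sketch of a parser for the MNIST IDX format. This is an illustration only; the actual `utils.py` shipped with the sample may differ in its details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# rough sketch of an MNIST IDX parser, similar in spirit to load_data in utils.py\n",
"import gzip\n",
"import struct\n",
"\n",
"def load_data_sketch(path, label=False):\n",
"    with gzip.open(path, 'rb') as f:\n",
"        if label:\n",
"            # label files: magic number and item count, then one byte per label\n",
"            magic, n = struct.unpack('>II', f.read(8))\n",
"            return np.frombuffer(f.read(), dtype=np.uint8)\n",
"        # image files: magic number, image count, rows, cols, then pixel bytes\n",
"        magic, n, rows, cols = struct.unpack('>IIII', f.read(16))\n",
"        return np.frombuffer(f.read(), dtype=np.uint8).reshape(n, rows * cols)"
]
},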
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# make sure utils.py is in the same directory as this code\n",
"from utils import load_data\n",
"\n",
"# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.\n",
"X_train = load_data('./data/train-images.gz', False) / 255.0\n",
"y_train = load_data('./data/train-labels.gz', True).reshape(-1)\n",
"\n",
"X_test = load_data('./data/test-images.gz', False) / 255.0\n",
"y_test = load_data('./data/test-labels.gz', True).reshape(-1)\n",
"\n",
"# now let's show some randomly chosen images from the traininng set.\n",
"count = 0\n",
"sample_size = 30\n",
"plt.figure(figsize = (16, 6))\n",
"for i in np.random.permutation(X_train.shape[0])[:sample_size]:\n",
" count = count + 1\n",
" plt.subplot(1, sample_size, count)\n",
" plt.axhline('')\n",
" plt.axvline('')\n",
" plt.text(x=10, y=-10, s=y_train[i], fontsize=18)\n",
" plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you have an idea of what these images look like and the expected prediction outcome.\n",
"\n",
"### Upload data to the cloud\n",
"\n",
"Now make the data accessible remotely by uploading that data from your local machine into Azure so it can be accessed for remote training. The datastore is a convenient construct associated with your workspace for you to upload/download data, and interact with it from your remote compute targets. It is backed by Azure blob storage account.\n",
"\n",
"The MNIST files are uploaded into a directory named `mnist` at the root of the datastore."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"use datastore"
]
},
"outputs": [],
"source": [
"ds = ws.get_default_datastore()\n",
"print(ds.datastore_type, ds.account_name, ds.container_name)\n",
"\n",
"ds.upload(src_dir='./data', target_path='mnist', overwrite=True, show_progress=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You now have everything you need to start training a model. \n",
"\n",
"## Train a local model\n",
"\n",
"Train a simple logistic regression model using scikit-learn locally.\n",
"\n",
"**Training locally can take a minute or two** depending on your computer configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"from sklearn.linear_model import LogisticRegression\n",
"\n",
"clf = LogisticRegression()\n",
"clf.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, make predictions using the test set and calculate the accuracy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_hat = clf.predict(X_test)\n",
"print(np.average(y_hat == y_test))"
]
},
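{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see *which* digits the model confuses, you can optionally compute a confusion matrix with scikit-learn. This is a small illustrative sketch rather than a required step of the tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional: the confusion matrix shows which digit pairs the model mixes up\n",
"from sklearn.metrics import confusion_matrix\n",
"\n",
"conf_mx = confusion_matrix(y_test, y_hat)\n",
"print(conf_mx)"
]
},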
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With just a few lines of code, you have a 92% accuracy.\n",
"\n",
"## Train on a remote cluster\n",
"\n",
"Now you can expand on this simple model by building a model with a different regularization rate. This time you'll train the model on a remote resource. \n",
"\n",
"For this task, submit the job to the remote training cluster you set up earlier. To submit a job you:\n",
"* Create a directory\n",
"* Create a training script\n",
"* Create an estimator object\n",
"* Submit the job \n",
"\n",
"### Create a directory\n",
"\n",
"Create a directory to deliver the necessary code from your computer to the remote resource."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"script_folder = './sklearn-mnist'\n",
"os.makedirs(script_folder, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a training script\n",
"\n",
"To submit the job to the cluster, first create a training script. Run the following code to create the training script called `train.py` in the directory you just created. This training adds a regularization rate to the training algorithm, so produces a slightly different model than the local version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $script_folder/train.py\n",
"\n",
"import argparse\n",
"import os\n",
"import numpy as np\n",
"\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.externals import joblib\n",
"\n",
"from azureml.core import Run\n",
"from utils import load_data\n",
"\n",
"# let user feed in 2 parameters, the location of the data files (from datastore), and the regularization rate of the logistic regression model\n",
"parser = argparse.ArgumentParser()\n",
"parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder mounting point')\n",
"parser.add_argument('--regularization', type=float, dest='reg', default=0.01, help='regularization rate')\n",
"args = parser.parse_args()\n",
"\n",
"data_folder = os.path.join(args.data_folder, 'mnist')\n",
"print('Data folder:', data_folder)\n",
"\n",
"# load train and test set into numpy arrays\n",
"# note we scale the pixel intensity values to 0-1 (by dividing it with 255.0) so the model can converge faster.\n",
"X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0\n",
"X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0\n",
"y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)\n",
"y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)\n",
"print(X_train.shape, y_train.shape, X_test.shape, y_test.shape, sep = '\\n')\n",
"\n",
"# get hold of the current run\n",
"run = Run.get_context()\n",
"\n",
"print('Train a logistic regression model with regularizaion rate of', args.reg)\n",
"clf = LogisticRegression(C=1.0/args.reg, random_state=42)\n",
"clf.fit(X_train, y_train)\n",
"\n",
"print('Predict the test set')\n",
"y_hat = clf.predict(X_test)\n",
"\n",
"# calculate accuracy on the prediction\n",
"acc = np.average(y_hat == y_test)\n",
"print('Accuracy is', acc)\n",
"\n",
"run.log('regularization rate', np.float(args.reg))\n",
"run.log('accuracy', np.float(acc))\n",
"\n",
"os.makedirs('outputs', exist_ok=True)\n",
"# note file saved in the outputs folder is automatically uploaded into experiment record\n",
"joblib.dump(value=clf, filename='outputs/sklearn_mnist_model.pkl')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how the script gets data and saves models:\n",
"\n",
"+ The training script reads an argument to find the directory containing the data. When you submit the job later, you point to the datastore for this argument:\n",
"`parser.add_argument('--data-folder', type=str, dest='data_folder', help='data directory mounting point')`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"+ The training script saves your model into a directory named outputs. <br/>\n",
"`joblib.dump(value=clf, filename='outputs/sklearn_mnist_model.pkl')`<br/>\n",
"Anything written in this directory is automatically uploaded into your workspace. You'll access your model from this directory later in the tutorial."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The file `utils.py` is referenced from the training script to load the dataset correctly. Copy this script into the script folder so that it can be accessed along with the training script on the remote resource."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import shutil\n",
"shutil.copy('utils.py', script_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create an estimator\n",
"\n",
"An estimator object is used to submit the run. Create your estimator by running the following code to define:\n",
"\n",
"* The name of the estimator object, `est`\n",
"* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. \n",
"* The compute target. In this case you will use the AmlCompute you created\n",
"* The training script name, train.py\n",
"* Parameters required from the training script \n",
"* Python packages needed for training\n",
"\n",
"In this tutorial, this target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for execution. The data_folder is set to use the datastore (`ds.as_mount()`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"configure estimator"
]
},
"outputs": [],
"source": [
"from azureml.train.estimator import Estimator\n",
"\n",
"script_params = {\n",
" '--data-folder': ds.as_mount(),\n",
" '--regularization': 0.8\n",
"}\n",
"\n",
"est = Estimator(source_directory=script_folder,\n",
" script_params=script_params,\n",
" compute_target=compute_target,\n",
" entry_script='train.py',\n",
" conda_packages=['scikit-learn'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the job to the cluster\n",
"\n",
"Run the experiment by submitting the estimator object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"amlcompute",
"scikit-learn"
]
},
"outputs": [],
"source": [
"run = exp.submit(config=est)\n",
"run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the call is asynchronous, it returns a **Preparing** or **Running** state as soon as the job is started.\n",
"\n",
"## Monitor a remote run\n",
"\n",
"In total, the first run takes **approximately 10 minutes**. But for subsequent runs, as long as the script dependencies don't change, the same image is reused and hence the container start up time is much faster.\n",
"\n",
"Here is what's happening while you wait:\n",
"\n",
"- **Image creation**: A Docker image is created matching the Python environment specified by the estimator. The image is uploaded to the workspace. Image creation and uploading takes **about 5 minutes**. \n",
"\n",
" This stage happens once for each Python environment since the container is cached for subsequent runs. During image creation, logs are streamed to the run history. You can monitor the image creation progress using these logs.\n",
"\n",
"- **Scaling**: If the remote cluster requires more nodes to execute the run than currently available, additional nodes are added automatically. Scaling typically takes **about 5 minutes.**\n",
"\n",
"- **Running**: In this stage, the necessary scripts and files are sent to the compute target, then data stores are mounted/copied, then the entry_script is run. While the job is running, stdout and the ./logs directory are streamed to the run history. You can monitor the run's progress using these logs.\n",
"\n",
"- **Post-Processing**: The ./outputs directory of the run is copied over to the run history in your workspace so you can access these results.\n",
"\n",
"\n",
"You can check the progress of a running job in multiple ways. This tutorial uses a Jupyter widget as well as a `wait_for_completion` method. \n",
"\n",
"### Jupyter widget\n",
"\n",
"Watch the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"use notebook widget"
]
},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(run).show()"
]
},
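{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer plain-text output over the widget, a simple polling loop is another option. This is an optional sketch; the `wait_for_completion` call in the next cell accomplishes the same thing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional alternative to the widget: poll the run status in a loop\n",
"import time\n",
"\n",
"while run.get_status() not in ['Completed', 'Failed', 'Canceled']:\n",
"    print('Current status:', run.get_status())\n",
"    time.sleep(15)\n",
"print('Final status:', run.get_status())"
]
},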
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get log results upon completion\n",
"\n",
"Model training and monitoring happen in the background. Wait until the model has completed training before running more code. Use `wait_for_completion` to show when the model training is complete."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"amlcompute",
"scikit-learn"
]
},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=False) # specify True for a verbose log"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Display run results\n",
"\n",
"You now have a model trained on a remote cluster. Retrieve the accuracy of the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"get metrics"
]
},
"outputs": [],
"source": [
"print(run.get_metrics())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the next tutorial you will explore this model in more detail.\n",
"\n",
"## Register model\n",
"\n",
"The last step in the training script wrote the file `outputs/sklearn_mnist_model.pkl` in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.\n",
"\n",
"You can see files associated with that run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"print(run.get_file_names())"
]
},
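{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want a local copy of the trained model before registering it, you can download it from the run. A minimal sketch, assuming the file name written by the training script above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional: download the model file from the run's outputs to the local machine\n",
"run.download_file(name='outputs/sklearn_mnist_model.pkl', output_file_path='sklearn_mnist_model.pkl')"
]
},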
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Register the model in the workspace so that you (or other collaborators) can later query, examine, and deploy this model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from history"
]
},
"outputs": [],
"source": [
"# register model \n",
"model = run.register_model(model_name='sklearn_mnist', model_path='outputs/sklearn_mnist_model.pkl')\n",
"print(model.name, model.id, model.version, sep = '\\t')"
]
},
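{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, you can fetch the registered model back from the workspace by name. A minimal sketch using the `Model` class:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# optional sanity check: retrieve the latest registered version of the model by name\n",
"from azureml.core.model import Model\n",
"\n",
"registered_model = Model(workspace=ws, name='sklearn_mnist')\n",
"print(registered_model.name, registered_model.version)"
]
},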
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"In this Azure Machine Learning tutorial, you used Python to:\n",
"\n",
"> * Set up your development environment\n",
"> * Access and examine the data\n",
"> * Train a simple logistic regression locally using the popular scikit-learn machine learning library\n",
"> * Train multiple models on a remote cluster\n",
"> * Review training details and register the best model\n",
"\n",
"You are ready to deploy this registered model using the instructions in the next part of the tutorial series:\n",
"\n",
"> [Tutorial 2 - Deploy models](img-classification-part2-deploy.ipynb)"
]
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
},
"msauthor": "sgilley"
},
"nbformat": 4,
"nbformat_minor": 2
}