{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License [2017] Zalando SE, https://tech.zalando.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Build a simple ML pipeline for image classification\n",
"\n",
"## Introduction\n",
"This tutorial shows how to train a simple deep neural network using the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset and Keras on Azure Machine Learning. Fashion-MNIST is a dataset of Zalando's article images\u00e2\u20ac\u201dconsisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes.\n",
"\n",
"Learn how to:\n",
"\n",
"> * Set up your development environment\n",
"> * Create the Fashion MNIST dataset\n",
"> * Create a machine learning pipeline to train a simple deep learning neural network on a remote cluster\n",
"> * Retrieve input datasets from the experiment and register the output model with datasets\n",
"\n",
"## Prerequisite:\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the latest version of AzureML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up your development environment\n",
"\n",
"All the setup for your development work can be accomplished in a Python notebook. Setup includes:\n",
"\n",
"* Importing Python packages\n",
"* Connecting to a workspace to enable communication between your local computer and remote resources\n",
"* Creating an experiment to track all your runs\n",
"* Creating a remote compute target to use for training\n",
"\n",
"### Import packages\n",
"\n",
"Import Python packages you need in this session. Also display the Azure Machine Learning SDK version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import azureml.core\n",
"from azureml.core import Workspace, Dataset, Datastore, ComputeTarget, Experiment, ScriptRunConfig\n",
"from azureml.pipeline.steps import PythonScriptStep\n",
"from azureml.pipeline.core import Pipeline\n",
"# check core SDK version number\n",
"print(\"Azure ML SDK Version: \", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to workspace\n",
"\n",
"Create a workspace object from the existing workspace. `Workspace.from_config()` reads the file **config.json** and loads the details into an object named `workspace`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load workspace\n",
"workspace = Workspace.from_config()\n",
"print('Workspace name: ' + workspace.name, \n",
" 'Azure region: ' + workspace.location, \n",
" 'Subscription id: ' + workspace.subscription_id, \n",
" 'Resource group: ' + workspace.resource_group, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create experiment and a directory\n",
"\n",
"Create an experiment to track the runs in your workspace and a directory to deliver the necessary code from your computer to the remote resource."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# create an ML experiment\n",
"exp = Experiment(workspace=workspace, name='keras-mnist-fashion')\n",
"\n",
"# create a directory\n",
"script_folder = './keras-mnist-fashion'\n",
"os.makedirs(script_folder, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create or Attach existing compute resource\n",
"By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.\n",
"\n",
"**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# choose a name for your cluster\n",
"cluster_name = \"gpu-cluster\"\n",
"\n",
"try:\n",
" compute_target = ComputeTarget(workspace=workspace, name=cluster_name)\n",
" print('Found existing compute target')\n",
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
" max_nodes=4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(workspace, cluster_name, compute_config)\n",
"\n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it uses the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Fashion MNIST dataset\n",
"\n",
"By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_urls = ['https://data4mldemo6150520719.blob.core.windows.net/demo/mnist-fashion']\n",
"fashion_ds = Dataset.File.from_files(data_urls)\n",
"\n",
"# list the files referenced by fashion_ds\n",
"fashion_ds.to_path()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build 2-step ML pipeline\n",
"\n",
"The [Azure Machine Learning Pipeline](https://docs.microsoft.com/azure/machine-learning/service/concept-ml-pipelines) enables data scientists to create and manage multiple simple and complex workflows concurrently. A typical pipeline would have multiple tasks to prepare data, train, deploy and evaluate models. Individual steps in the pipeline can make use of diverse compute options (for example: CPU for data preparation and GPU for training) and languages. [Learn More](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines)\n",
"\n",
"\n",
"### Step 1: data preparation\n",
"\n",
"In step one, we will load the image and labels from Fashion MNIST dataset into mnist_train.csv and mnist_test.csv\n",
"\n",
"Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels in total. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255. Both mnist_train.csv and mnist_test.csv contain 785 columns. The first column consists of the class labels, which represent the article of clothing. The rest of the columns contain the pixel-values of the associated image."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Intermediate data (or output of a step) is represented by a `OutputFileDatasetConfig` object. preprared_fashion_ds is produced as the output of step 1, and used as the input of step 2. `OutputFileDatasetConfig` introduces a data dependency between steps, and creates an implicit execution order in the pipeline. You can register a `OutputFileDatasetConfig` as a dataset and version the output data automatically."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.data import OutputFileDatasetConfig\n",
"\n",
"# learn more about the output config\n",
"help(OutputFileDatasetConfig)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# write output to datastore under folder `outputdataset` and register it as a dataset after the experiment completes\n",
"# make sure the service principal in your datastore has blob data contributor role in order to write data back\n",
"datastore=workspace.get_default_datastore()\n",
"prepared_fashion_ds = OutputFileDatasetConfig(destination=(datastore, 'outputdataset/{run-id}')).register_on_complete(name='prepared_fashion_ds')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A **PythonScriptStep** is a basic, built-in step to run a Python Script on a compute target. It takes a script name and optionally other parameters like arguments for the script, compute target, inputs and outputs. If no compute target is specified, default compute target for the workspace is used. You can also use a [**RunConfiguration**](https://docs.microsoft.com/python/api/azureml-core/azureml.core.runconfiguration?view=azure-ml-py) to specify requirements for the PythonScriptStep, such as conda dependencies and docker image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prep_step = PythonScriptStep(name='prepare step',\n",
" script_name=\"prepare.py\",\n",
" # mount fashion_ds dataset to the compute_target\n",
" arguments=[fashion_ds.as_named_input('fashion_ds').as_mount(), prepared_fashion_ds],\n",
" source_directory=script_folder,\n",
" compute_target=compute_target,\n",
" allow_reuse=True)"
]
},
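{
"cell_type": "markdown",
"metadata": {},
"source": [
"The prepare step runs `prepare.py` from `script_folder`, which this notebook has not created yet. Below is a minimal sketch of such a script (illustrative, not the official one). It assumes the mounted `fashion_ds` folder contains the four standard gzipped idx files of Fashion MNIST and that `numpy` is available in the step's environment; it converts them into the mnist_train.csv and mnist_test.csv files described above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $script_folder/prepare.py\n",
"# minimal sketch of a data preparation script (illustrative, not the official one)\n",
"# sys.argv[1] = mount point of fashion_ds; sys.argv[2] = output folder backing prepared_fashion_ds\n",
"import gzip\n",
"import os\n",
"import sys\n",
"\n",
"import numpy as np\n",
"\n",
"\n",
"def load_idx(path, offset):\n",
"    # read a gzipped idx file into a flat uint8 array, skipping the header bytes\n",
"    with gzip.open(path, 'rb') as f:\n",
"        return np.frombuffer(f.read(), np.uint8, offset=offset)\n",
"\n",
"\n",
"input_dir, output_dir = sys.argv[1], sys.argv[2]\n",
"os.makedirs(output_dir, exist_ok=True)\n",
"\n",
"splits = [('train', 'train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz'),\n",
"          ('test', 't10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz')]\n",
"\n",
"for split, image_file, label_file in splits:\n",
"    images = load_idx(os.path.join(input_dir, image_file), offset=16).reshape(-1, 784)\n",
"    labels = load_idx(os.path.join(input_dir, label_file), offset=8)\n",
"    # first column is the class label, the remaining 784 columns are pixel values\n",
"    table = np.column_stack([labels, images])\n",
"    header = 'label,' + ','.join('pixel' + str(i) for i in range(784))\n",
"    np.savetxt(os.path.join(output_dir, 'mnist_%s.csv' % split),\n",
"               table, fmt='%d', delimiter=',', header=header, comments='')"
]
},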
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 2: train CNN with Keras\n",
"\n",
"Next, construct a ScriptRunConfig to configure the training run that trains a CNN model using Keras. It takes a dataset as the input."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile conda_dependencies.yml\n",
"\n",
"dependencies:\n",
"- python=3.6.2\n",
"- pip:\n",
" - azureml-defaults\n",
" - keras\n",
" - tensorflow\n",
" - numpy\n",
" - scikit-learn\n",
" - pandas\n",
" - matplotlib"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"\n",
"keras_env = Environment.from_conda_specification(name = 'keras-env', file_path = './conda_dependencies.yml')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_src = ScriptRunConfig(source_directory=script_folder,\n",
" script='train.py',\n",
" compute_target=compute_target,\n",
" environment=keras_env)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pass the run configuration details into the PythonScriptStep."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_step = PythonScriptStep(name='train step',\n",
" arguments=[prepared_fashion_ds.read_delimited_files().as_input(name='prepared_fashion_ds')],\n",
" source_directory=train_src.source_directory,\n",
" script_name=train_src.script,\n",
" runconfig=train_src.run_config)"
]
},
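{
"cell_type": "markdown",
"metadata": {},
"source": [
"The train step runs `train.py`, which is also not created by this notebook. Below is a minimal sketch, assuming a small Keras CNN; the architecture, hyperparameters, and metric name are illustrative. It reads the tabular input named `prepared_fashion_ds`, logs an accuracy metric (so that `get_metrics()` below returns something), and saves the model under `outputs/model/`, where `register_model` later expects it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $script_folder/train.py\n",
"# minimal sketch of a training script (illustrative architecture and hyperparameters)\n",
"import os\n",
"\n",
"from azureml.core import Run\n",
"from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D\n",
"from keras.models import Sequential\n",
"from keras.utils import to_categorical\n",
"\n",
"run = Run.get_context()\n",
"\n",
"# the TabularDataset passed to train_step as the named input 'prepared_fashion_ds'\n",
"# (for simplicity this sketch trains on every row it contains, train and test alike)\n",
"df = run.input_datasets['prepared_fashion_ds'].to_pandas_dataframe()\n",
"\n",
"# first column is the label; the remaining 784 columns are pixel values\n",
"y = to_categorical(df.iloc[:, 0].values, num_classes=10)\n",
"X = df.iloc[:, 1:].values.astype('float32').reshape(-1, 28, 28, 1) / 255.0\n",
"\n",
"# a deliberately small CNN\n",
"model = Sequential([Conv2D(32, kernel_size=3, activation='relu', input_shape=(28, 28, 1)),\n",
"                    MaxPooling2D(pool_size=2),\n",
"                    Flatten(),\n",
"                    Dense(128, activation='relu'),\n",
"                    Dense(10, activation='softmax')])\n",
"model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n",
"model.fit(X, y, epochs=5, batch_size=128, validation_split=0.1)\n",
"\n",
"# log a metric so it shows up in get_metrics() on the step run\n",
"loss, accuracy = model.evaluate(X, y, verbose=0)\n",
"run.log('accuracy', float(accuracy))\n",
"\n",
"# save under outputs/model/ so register_model can find it after the run\n",
"os.makedirs('outputs/model', exist_ok=True)\n",
"model.save('outputs/model/model.h5')"
]
},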
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the pipeline\n",
"Once we have the steps (or steps collection), we can build the [pipeline](https://docs.microsoft.com/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py).\n",
"\n",
"A pipeline is created with a list of steps and a workspace. Submit a pipeline using `submit`. When submit is called, a [PipelineRun](https://docs.microsoft.com/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinerun?view=azure-ml-py) is created which in turn creates [StepRun](https://docs.microsoft.com/python/api/azureml-pipeline-core/azureml.pipeline.core.steprun?view=azure-ml-py) objects for each step in the workflow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# build pipeline & run experiment\n",
"pipeline = Pipeline(workspace, steps=[prep_step, train_step])\n",
"run = exp.submit(pipeline)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor the PipelineRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"inputHidden": false,
"outputHidden": false
},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.find_step_run('train step')[0].get_metrics()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the input dataset and the output model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Azure Machine Learning dataset makes it easy to trace how your data is used in ML. [Learn More](https://docs.microsoft.com/azure/machine-learning/service/how-to-version-track-datasets#track-datasets-in-experiments)<br>\n",
"For each Machine Learning experiment, you can easily trace the datasets used as the input through `Run` object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get input datasets\n",
"prep_step = run.find_step_run('prepare step')[0]\n",
"inputs = prep_step.get_details()['inputDatasets']\n",
"input_dataset = inputs[0]['dataset']\n",
"\n",
"# list the files referenced by input_dataset\n",
"input_dataset.to_path()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Register the input Fashion MNIST dataset with the workspace so that you can reuse it in other experiments or share it with your colleagues who have access to your workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fashion_ds = input_dataset.register(workspace = workspace,\n",
" name = 'fashion_ds',\n",
" description = 'image and label files from fashion mnist',\n",
" create_new_version = True)\n",
"fashion_ds"
]
},
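{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once registered, the dataset can be retrieved by name (latest version by default) from any notebook or script that can access the workspace, for example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# retrieve the registered dataset by name; version='latest' is the default\n",
"fashion_ds = Dataset.get_by_name(workspace, name='fashion_ds')\n",
"print(fashion_ds.name, fashion_ds.version)"
]
},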
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Register the output model with dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.find_step_run('train step')[0].register_model(model_name = 'keras-model', model_path = 'outputs/model/', \n",
" datasets =[('train test data',fashion_ds)])"
]
}
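,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick lineage check (a minimal sketch, assuming the `datasets` property exposed by `azureml.core.Model`), you can retrieve the registered model and list the datasets linked to it at registration time:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Model\n",
"\n",
"# retrieve the registered model and list the datasets linked to it\n",
"model = Model(workspace, name='keras-model')\n",
"model.datasets"
]
}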
],
"metadata": {
"authors": [
{
"name": "sihhu"
}
],
"category": "tutorial",
"compute": [
"Remote"
],
"datasets": [
"Fashion MNIST"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"Azure ML"
],
"friendly_name": "Datasets with ML Pipeline",
"index_order": 1,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
},
"star_tag": [
"featured"
],
"tags": [
"Dataset",
"Pipeline",
"Estimator",
"ScriptRun"
],
"task": "Train"
},
"nbformat": 4,
"nbformat_minor": 2
}