{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: Azure Machine Learning Quickstart\n",
"\n",
"In this tutorial, you learn how to quickly get started with Azure Machine Learning. Using a *compute instance* - a fully managed, cloud-based VM that is pre-configured with the latest data science tools - you will train an image classification model on the CIFAR10 dataset.\n",
"\n",
"In this tutorial you will learn how to:\n",
"\n",
"* Create a compute instance and attach it to a notebook\n",
"* Train an image classification model and log metrics\n",
"* Deploy the model\n",
"\n",
"## Prerequisites\n",
"\n",
"1. An Azure Machine Learning workspace\n",
"1. Familiarity with the Python language and machine learning workflows\n",
"\n",
"\n",
"## Create compute & attach to notebook\n",
"\n",
"To run this notebook you will need to create an Azure Machine Learning _compute instance_. The benefits of a compute instance over a local machine (e.g. a laptop) or cloud VM are as follows:\n",
"\n",
"* It is pre-configured with all the latest data science libraries (e.g. pandas, scikit-learn, TensorFlow, PyTorch) and tools (Jupyter, RStudio). In this tutorial we make extensive use of PyTorch, the Azure ML SDK, and matplotlib, and none of these need to be installed on a compute instance.\n",
"* Notebooks are separate from the compute instance - this means that you can develop your notebook on a small VM size and then seamlessly scale up to a larger (and/or GPU-enabled) machine when needed to train a model.\n",
"* You can easily turn the instance on and off to control costs. \n",
"\n",
"To create compute, click on the + button at the top of the notebook viewer in Azure Machine Learning Studio:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create.PNG\" width=\"500\"/>\n",
"\n",
"This will open the __New compute instance__ blade. Provide a valid __Compute name__ (valid characters are upper and lower case letters, digits, and the - character), then click on __Create__. \n",
"\n",
"It will take approximately 3 minutes for the compute to be ready. When the compute is ready you will see a green light next to the compute name at the top of the notebook viewer:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create2.PNG\" width=\"500\"/>\n",
"\n",
"You will also notice that the notebook is attached to the __Python 3.6 - AzureML__ Jupyter kernel. Other kernels, such as R, can be selected. In addition, if you have other compute instances you can switch to them using the dropdown menu next to the Compute label.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Data\n",
"\n",
"For this tutorial, you will use the CIFAR10 dataset. It has the classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The images in CIFAR-10 are three-channel color images, 32x32 pixels in size.\n",
"\n",
"The code cell below uses the PyTorch API to download the data to your compute instance, which should be quick (around 15 seconds). The data is divided into training and test sets.\n",
"\n",
"* **NOTE: The data is downloaded to the compute instance (in the `/tmp` directory) and not a durable cloud-based store like Azure Blob Storage or Azure Data Lake. This means that if you delete the compute instance the data will be lost. The [getting started with Azure Machine Learning tutorial series](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local) shows how to create an Azure Machine Learning *dataset*, which aids durability, versioning, and collaboration.**"
]
},
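{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you do want the downloaded data to survive deletion of the compute instance, a minimal sketch along the following lines could be used to copy it to the workspace's default datastore and register it as a dataset. This step is optional for this quickstart; the dataset name `cifar10-data` is illustrative, and the exact SDK calls may vary with your azureml-core version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Dataset, Workspace\n",
"\n",
"# connect to the workspace and its default (blob) datastore\n",
"ws = Workspace.from_config()\n",
"datastore = ws.get_default_datastore()\n",
"\n",
"# copy the locally downloaded files into durable cloud storage\n",
"datastore.upload(src_dir='/tmp/data', target_path='cifar10', overwrite=True)\n",
"\n",
"# create a file dataset over the uploaded files and register it for reuse\n",
"cifar_dataset = Dataset.File.from_files(path=(datastore, 'cifar10'))\n",
"cifar_dataset = cifar_dataset.register(workspace=ws, name='cifar10-data',\n",
"                                       create_new_version=True)"
]
},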
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600881820920
}
},
"outputs": [],
"source": [
"import torch\n",
"import torch.optim as optim\n",
"import torchvision\n",
"import torchvision.transforms as transforms\n",
"\n",
"# convert images to tensors and normalize each channel to the range [-1, 1]\n",
"transform = transforms.Compose(\n",
"    [transforms.ToTensor(),\n",
"     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n",
"\n",
"# download the training and test sets to the compute instance\n",
"trainset = torchvision.datasets.CIFAR10(root='/tmp/data', train=True,\n",
"                                        download=True, transform=transform)\n",
"trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,\n",
"                                          shuffle=True, num_workers=2)\n",
"\n",
"testset = torchvision.datasets.CIFAR10(root='/tmp/data', train=False,\n",
"                                       download=True, transform=transform)\n",
"testloader = torch.utils.data.DataLoader(testset, batch_size=4,\n",
"                                         shuffle=False, num_workers=2)\n",
"\n",
"classes = ('plane', 'car', 'bird', 'cat',\n",
"           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Take a look at the data\n",
"The following cell contains some Python code that displays the first batch of 4 CIFAR10 images:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600882160868
}
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"def imshow(img):\n",
"    img = img / 2 + 0.5     # unnormalize\n",
"    npimg = img.numpy()\n",
"    plt.imshow(np.transpose(npimg, (1, 2, 0)))\n",
"    plt.show()\n",
"\n",
"\n",
"# get some random training images\n",
"dataiter = iter(trainloader)\n",
"images, labels = next(dataiter)\n",
"\n",
"# show images\n",
"imshow(torchvision.utils.make_grid(images))\n",
"# print labels\n",
"print(' '.join('%5s' % classes[labels[j]] for j in range(4)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train model and log metrics\n",
"\n",
"In the directory `model` you will see a file called [model.py](./model/model.py) that defines the neural network architecture. The model is trained using the code below.\n",
"\n",
"* **Note: The model training takes around 4 minutes to complete. The benefit of a compute instance is that the notebooks are separate from the compute - therefore you can easily switch to a different size/type of instance. For example, you could switch to run this training on a GPU-based compute instance if you had one provisioned. In the code below you can see that we have included `torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")`, which detects whether you are running on a CPU or GPU machine.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600882387754
},
"tags": [
"local run"
]
},
"outputs": [],
"source": [
"from model.model import Net\n",
"from azureml.core import Experiment\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"\n",
"# use a GPU if one is available on this compute instance\n",
"device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
"print(device)\n",
"\n",
"exp = Experiment(workspace=ws, name=\"cifar10-experiment\")\n",
"run = exp.start_logging(snapshot_directory=None)\n",
"\n",
"# define convolutional network\n",
"net = Net()\n",
"net.to(device)\n",
"\n",
"# set up pytorch loss / optimizer\n",
"criterion = torch.nn.CrossEntropyLoss()\n",
"optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)\n",
"\n",
"run.log(\"learning rate\", 0.001)\n",
"run.log(\"momentum\", 0.9)\n",
"\n",
"# train the network\n",
"for epoch in range(1):\n",
"    running_loss = 0.0\n",
"    for i, data in enumerate(trainloader, 0):\n",
"        # unpack the data\n",
"        inputs, labels = data[0].to(device), data[1].to(device)\n",
"\n",
"        # zero the parameter gradients\n",
"        optimizer.zero_grad()\n",
"\n",
"        # forward + backward + optimize\n",
"        outputs = net(inputs)\n",
"        loss = criterion(outputs, labels)\n",
"        loss.backward()\n",
"        optimizer.step()\n",
"\n",
"        # log the average loss every 2000 mini-batches\n",
"        running_loss += loss.item()\n",
"        if i % 2000 == 1999:\n",
"            loss = running_loss / 2000\n",
"            run.log(\"loss\", loss)\n",
"            print(f'epoch={epoch + 1}, batch={i + 1:5}: loss {loss:.2f}')\n",
"            running_loss = 0.0\n",
"\n",
"print('Finished Training')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have executed the training cell above you can view the metrics updating in real time in the Azure Machine Learning studio:\n",
"\n",
"1. Select **Experiments** (left-hand menu)\n",
"1. Select **cifar10-experiment**\n",
"1. Select **Run 1**\n",
"1. Select the **Metrics** tab\n",
"\n",
"The metrics tab will display the following graph:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/metrics-capture.PNG\" alt=\"dataset details\" width=\"500\"/>"
]
},
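{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check without leaving the notebook, the logged metrics can also be read back from the run object. This is a minimal sketch using the `run` created above; it assumes the training cell has already executed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# retrieve the metrics logged so far on this run (the same values shown in the studio UI)\n",
"print(run.get_metrics())\n",
"\n",
"# link to the run's page in Azure Machine Learning studio\n",
"print(run.get_portal_url())"
]
},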
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Understand the code\n",
"\n",
"The code is based on the [PyTorch 60 Minute Blitz](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py), with a few additional lines of code added to track the loss metric as the neural network trains.\n",
"\n",
"| Code | Description | \n",
"| ------------- | ---------- |\n",
"| `exp = Experiment( ... )` | [Experiment](https://docs.microsoft.com/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py&preserve-view=true) provides a simple way to organize multiple runs under a single name. Later you can see how experiments make it easy to compare metrics between dozens of runs. |\n",
"| `run.log()` | This logs the metrics to Azure Machine Learning. |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Version control models with the Model Registry\n",
"\n",
"You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. Azure Machine Learning supports any model that can be loaded through Python 3.\n",
"\n",
"The code below does the following:\n",
"\n",
"1. Saves the model on the compute instance\n",
"1. Uploads the model file to the run (if you look at the experiment in Azure Machine Learning studio you should see on the **Outputs + logs** tab that the model has been saved in the run)\n",
"1. Registers the uploaded model file\n",
"1. Transitions the run to a completed state"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600888071066
},
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core import Model\n",
"\n",
"PATH = 'cifar_net.pth'\n",
"torch.save(net.state_dict(), PATH)\n",
"\n",
"run.upload_file(name=PATH, path_or_stream=PATH)\n",
"model = run.register_model(model_name='cifar10-model', \n",
"                           model_path=PATH,\n",
"                           model_framework=Model.Framework.PYTORCH,\n",
"                           description='cifar10 model')\n",
"\n",
"run.complete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View model in the model registry\n",
"\n",
"You can see the stored model by navigating to **Models** in the left-hand menu bar of Azure Machine Learning Studio. Click on **cifar10-model** to see the details of the model, such as the experiment run ID that created the model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy the model\n",
"\n",
"The next cell deploys the model to an Azure Container Instance so that you can score data in real time (Azure Machine Learning also provides mechanisms for batch scoring). A real-time endpoint allows application developers to integrate machine learning into their apps.\n",
"\n",
"* **Note: The deployment takes around 3 minutes to complete.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core import Environment, Model\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AciWebservice\n",
"\n",
"environment = Environment.get(ws, \"AzureML-PyTorch-1.6-CPU\")\n",
"model = Model(ws, \"cifar10-model\")\n",
"\n",
"service_name = 'cifar-service'\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"\n",
"service = Model.deploy(workspace=ws,\n",
"                       name=service_name,\n",
"                       models=[model],\n",
"                       inference_config=inference_config,\n",
"                       deployment_config=aci_config,\n",
"                       overwrite=True)\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Understand the code\n",
"\n",
"| Code | Description | \n",
"| ------------- | ---------- |\n",
"| `environment = Environment.get()` | [Environments](https://docs.microsoft.com/python/api/overview/azure/ml/?view=azure-ml-py#environment) specify the Python packages, environment variables, and software settings around your training and scoring scripts. In this case, you are using a *curated environment* that has all the packages needed to run PyTorch. |\n",
"| `inference_config = InferenceConfig()` | This specifies the inference (scoring) configuration for the deployment, such as the script to use when scoring (see below) and the environment to run it in. |\n",
"| `service = Model.deploy()` | Deploys the model. |\n",
"\n",
"The [*scoring script*](score.py) file has two functions:\n",
"\n",
"1. an `init` function that executes once when the service starts - in this function you normally get the model from the registry and set global variables\n",
"1. a `run(data)` function that executes each time a call is made to the service. In this function, you normally deserialize the JSON, run a prediction, and output the predicted result.\n",
"\n",
"A sketch of what such a script can look like is shown below."
]
},
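{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below is illustrative only - it is not the actual [score.py](score.py) used by this deployment, and it assumes the `Net` class and the registered file name `cifar_net.pth` are available to the entry script (in a real deployment the architecture definition must be shipped alongside the script):\n",
"\n",
"```python\n",
"import json\n",
"import os\n",
"\n",
"import torch\n",
"\n",
"from model.model import Net  # assumption: the architecture is importable by the entry script\n",
"\n",
"\n",
"def init():\n",
"    # runs once when the service starts: load the registered model into memory\n",
"    global model\n",
"    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'cifar_net.pth')\n",
"    model = Net()\n",
"    model.load_state_dict(torch.load(model_path, map_location='cpu'))\n",
"    model.eval()\n",
"\n",
"\n",
"def run(raw_data):\n",
"    # runs on every request: deserialize the JSON payload, predict, return results\n",
"    images = torch.tensor(json.loads(raw_data)['data'])\n",
"    with torch.no_grad():\n",
"        outputs = model(images)\n",
"        _, predicted = torch.max(outputs, 1)\n",
"    return predicted.tolist()\n",
"```"
]
},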
"## Test the model service\n",
|
|
"\n",
|
|
"In the next cell, you get some unseen data from the test loader:"
|
|
]
|
|
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataiter = iter(testloader)\n",
"images, labels = next(dataiter)\n",
"\n",
"# print images\n",
"imshow(torchvision.utils.make_grid(images))\n",
"print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, the next cell scores the above images using the deployed model service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"input_payload = json.dumps({\n",
"    'data': images.tolist()\n",
"})\n",
"\n",
"output = service.run(input_payload)\n",
"print(output)"
]
},
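{
"cell_type": "markdown",
"metadata": {},
"source": [
"`service.run()` calls the web service through the SDK. An application would normally call the service's REST endpoint directly instead. As a minimal sketch (assuming the `requests` package is available and that the ACI service was deployed without key authentication, which is the default):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"# POST the same JSON payload to the service's scoring endpoint over HTTP\n",
"response = requests.post(service.scoring_uri,\n",
"                         data=input_payload,\n",
"                         headers={'Content-Type': 'application/json'})\n",
"print(response.json())"
]
},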
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up resources\n",
"\n",
"To clean up the resources after this quickstart, first delete the model service using:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, stop the compute instance by following these steps:\n",
"\n",
"1. Go to **Compute** in the left-hand menu of the Azure Machine Learning studio\n",
"1. Select your compute instance\n",
"1. Select **Stop**\n",
"\n",
"\n",
"**Important: The resources you created can be used as prerequisites to other Azure Machine Learning tutorials and how-to articles.** If you don't plan to use the resources you created, delete them so you don't incur any charges:\n",
"\n",
"1. In the Azure portal, select **Resource groups** on the far left.\n",
"1. From the list, select the resource group you created.\n",
"1. Select **Delete resource group**.\n",
"1. Enter the resource group name. Then select **Delete**.\n",
"\n",
"You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps\n",
"\n",
"In this tutorial, you have seen how to run your machine learning code on a fully managed, pre-configured, cloud-based VM called a *compute instance*. Using a compute instance as your development environment removes the burden of installing data science tooling and libraries (for example, Jupyter, PyTorch, TensorFlow, scikit-learn) and allows you to easily scale the compute power (RAM, cores) up or down, since the notebooks are separate from the VM. \n",
"\n",
"Often, once you have your machine learning code working in a development environment, you will want to productionize it by running it as a **_job_** - ideally on a schedule or trigger (for example, the arrival of new data). To this end, we recommend that you follow [**the day 1 getting started with Azure Machine Learning tutorial**](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local). That tutorial is focused on running job-based machine learning code in the cloud."
]
}
],
"metadata": {
"authors": [
{
"name": "samkemp"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}