{
|
|
"cells": [
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
|
|
"Licensed under the MIT License."
|
|
]
|
|
},
|
|
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Handwritten Digit Classification (MNIST) using ONNX Runtime on Azure ML\n",
|
|
"\n",
|
|
"This example shows how to deploy an image classification neural network using the Modified National Institute of Standards and Technology ([MNIST](http://yann.lecun.com/exdb/mnist/)) dataset and Open Neural Network eXchange format ([ONNX](http://aka.ms/onnxdocarticle)) on the Azure Machine Learning platform. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing number from 0 to 9. This tutorial will show you how to deploy a MNIST model from the [ONNX model zoo](https://github.com/onnx/models), use it to make predictions using ONNX Runtime Inference, and deploy it as a web service in Azure.\n",
|
|
"\n",
|
|
"Throughout this tutorial, we will be referring to ONNX, a neural network exchange format used to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools (CNTK, PyTorch, Caffe, MXNet, TensorFlow) and choose the combination that is best for them. ONNX is developed and supported by a community of partners including Microsoft AI, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai) and [open source files](https://github.com/onnx).\n",
|
|
"\n",
|
|
"[ONNX Runtime](https://aka.ms/onnxruntime-python) is the runtime engine that enables evaluation of trained machine learning (Traditional ML and Deep Learning) models with high performance and low resource utilization.\n",
|
|
"\n",
|
|
"#### Tutorial Objectives:\n",
|
|
"\n",
|
|
"- Describe the MNIST dataset and pretrained Convolutional Neural Net ONNX model, stored in the ONNX model zoo.\n",
|
|
"- Deploy and run the pretrained MNIST ONNX model on an Azure Machine Learning instance\n",
|
|
"- Predict labels for test set data points in the cloud using ONNX Runtime and Azure ML"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Prerequisites\n",
|
|
"\n",
|
|
"### 1. Install Azure ML SDK and create a new workspace\n",
|
|
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
|
|
"\n",
|
|
"### 2. Install additional packages needed for this tutorial notebook\n",
|
|
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed. \n",
|
|
"\n",
|
|
"```sh\n",
|
|
"(myenv) $ pip install matplotlib onnx opencv-python\n",
|
|
"```\n",
|
|
"\n",
|
|
"**Debugging tip**: Make sure that you run the \"jupyter notebook\" command to launch this notebook after activating your virtual environment. Choose the respective Python kernel for your new virtual environment using the `Kernel > Change Kernel` menu above. If you have completed the steps correctly, the upper right corner of your screen should state `Python [conda env:myenv]` instead of `Python [default]`.\n",
|
|
"\n",
|
|
"### 3. Download sample data and pre-trained ONNX model from ONNX Model Zoo.\n",
|
|
"\n",
|
|
"In the following lines of code, we download [the trained ONNX MNIST model and corresponding test data](https://github.com/onnx/models/tree/master/vision/classification/mnist) and place them in the same folder as this tutorial notebook. For more information about the MNIST dataset, please visit [Yan LeCun's website](http://yann.lecun.com/exdb/mnist/)."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# urllib is a built-in Python library to download files from URLs\n",
|
|
"\n",
|
|
"# Objective: retrieve the latest version of the ONNX MNIST model files from the\n",
|
|
"# ONNX Model Zoo and save it in the same folder as this tutorial\n",
|
|
"\n",
|
|
"import urllib.request\n",
|
|
"\n",
|
|
"onnx_model_url = \"https://www.cntk.ai/OnnxModels/mnist/opset_7/mnist.tar.gz\"\n",
|
|
"\n",
|
|
"urllib.request.urlretrieve(onnx_model_url, filename=\"mnist.tar.gz\")"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# the ! magic command tells our jupyter notebook kernel to run the following line of \n",
|
|
"# code from the command line instead of the notebook kernel\n",
|
|
"\n",
|
|
"# We use tar and xvcf to unzip the files we just retrieved from the ONNX model zoo\n",
|
|
"\n",
|
|
"!tar xvzf mnist.tar.gz"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Deploy a VM with your ONNX model in the Cloud\n",
|
|
"\n",
|
|
"### Load Azure ML workspace\n",
|
|
"\n",
|
|
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# Check core SDK version number\n",
|
|
"import azureml.core\n",
|
|
"\n",
|
|
"print(\"SDK version:\", azureml.core.VERSION)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"from azureml.core import Workspace\n",
|
|
"\n",
|
|
"ws = Workspace.from_config()\n",
|
|
"print(ws.name, ws.resource_group, ws.location, sep = '\\n')"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Registering your model with Azure ML"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"model_dir = \"mnist\" # replace this with the location of your model files\n",
|
|
"\n",
|
|
"# leave as is if it's in the same folder as this notebook"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"from azureml.core.model import Model\n",
|
|
"\n",
|
|
"model = Model.register(workspace = ws,\n",
|
|
" model_path = model_dir + \"/\" + \"model.onnx\",\n",
|
|
" model_name = \"mnist_1\",\n",
|
|
" tags = {\"onnx\": \"demo\"},\n",
|
|
" description = \"MNIST image classification CNN from ONNX Model Zoo\",)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Optional: Displaying your registered models\n",
|
|
"\n",
|
|
"This step is not required, so feel free to skip it."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"models = ws.models\n",
|
|
"for name, m in models.items():\n",
|
|
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"nbpresent": {
|
|
"id": "c3f2f57c-7454-4d3e-b38d-b0946cf066ea"
|
|
}
|
|
},
|
|
"source": [
|
|
"### ONNX MNIST Model Methodology\n",
|
|
"\n",
|
|
"The image classification model we are using is pre-trained using Microsoft's deep learning cognitive toolkit, [CNTK](https://github.com/Microsoft/CNTK), from the [ONNX model zoo](http://github.com/onnx/models). The model zoo has many other models that can be deployed on cloud providers like AzureML without any additional training. To ensure that our cloud deployed model works, we use testing data from the famous MNIST data set, provided as part of the [trained MNIST model](https://github.com/onnx/models/tree/master/vision/classification/mnist) in the ONNX model zoo.\n",
|
|
"\n",
|
|
"***Input: Handwritten Images from MNIST Dataset***\n",
|
|
"\n",
|
|
"***Task: Classify each MNIST image into an appropriate digit***\n",
|
|
"\n",
|
|
"***Output: Digit prediction for input image***\n",
|
|
"\n",
|
|
"Run the cell below to look at some of the sample images from the MNIST dataset that we used to train this ONNX model. Remember, once the application is deployed in Azure ML, you can use your own images as input for the model to classify!"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# for images and plots in this notebook\n",
|
|
"import matplotlib.pyplot as plt \n",
|
|
"from IPython.display import Image\n",
|
|
"\n",
|
|
"# display images inline\n",
|
|
"%matplotlib inline"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"Image(url=\"http://3.bp.blogspot.com/_UpN7DfJA0j4/TJtUBWPk0SI/AAAAAAAAABY/oWPMtmqJn3k/s1600/mnist_originals.png\", width=200, height=200)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Specify our Score and Environment Files"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file. You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating out model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n",
|
|
"\n",
|
|
"### Write Score File\n",
|
|
"\n",
|
|
"A score file is what tells our Azure cloud service what to do. After initializing our model using azureml.core.model, we start an ONNX Runtime inference session to evaluate the data passed in on our function calls."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"%%writefile score.py\n",
|
|
"import json\n",
|
|
"import numpy as np\n",
|
|
"import onnxruntime\n",
|
|
"import sys\n",
|
|
"import os\n",
|
|
"from azureml.core.model import Model\n",
|
|
"import time\n",
|
|
"\n",
|
|
"\n",
|
|
"def init():\n",
|
|
" global session, input_name, output_name\n",
|
|
" model = Model.get_model_path(model_name = 'mnist_1')\n",
|
|
" session = onnxruntime.InferenceSession(model, None)\n",
|
|
" input_name = session.get_inputs()[0].name\n",
|
|
" output_name = session.get_outputs()[0].name \n",
|
|
" \n",
|
|
"\n",
|
|
"def preprocess(input_data_json):\n",
|
|
" # convert the JSON data into the tensor input\n",
|
|
" return np.array(json.loads(input_data_json)['data']).astype('float32')\n",
|
|
"\n",
|
|
"def postprocess(result):\n",
|
|
" # We use argmax to pick the highest confidence label\n",
|
|
" return int(np.argmax(np.array(result).squeeze(), axis=0))\n",
|
|
" \n",
|
|
"def run(input_data):\n",
|
|
"\n",
|
|
" try:\n",
|
|
" # load in our data, convert to readable format\n",
|
|
" data = preprocess(input_data)\n",
|
|
" \n",
|
|
" # start timer\n",
|
|
" start = time.time()\n",
|
|
" \n",
|
|
" r = session.run([output_name], {input_name: data})\n",
|
|
" \n",
|
|
" #end timer\n",
|
|
" end = time.time()\n",
|
|
" \n",
|
|
" result = postprocess(r)\n",
|
|
" result_dict = {\"result\": result,\n",
|
|
" \"time_in_sec\": end - start}\n",
|
|
" except Exception as e:\n",
|
|
" result_dict = {\"error\": str(e)}\n",
|
|
" \n",
|
|
" return result_dict\n",
|
|
"\n",
|
|
"def choose_class(result_prob):\n",
|
|
" \"\"\"We use argmax to determine the right label to choose from our output\"\"\"\n",
|
|
" return int(np.argmax(result_prob, axis=0))"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Write Environment File\n",
|
|
"\n",
|
|
"This step creates a YAML environment file that specifies which dependencies we would like to see in our Linux Virtual Machine."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"from azureml.core.conda_dependencies import CondaDependencies \n",
|
|
"\n",
|
|
"myenv = CondaDependencies.create(pip_packages=[\"numpy\", \"onnxruntime\", \"azureml-core\"])\n",
|
|
"\n",
|
|
"with open(\"myenv.yml\",\"w\") as f:\n",
|
|
" f.write(myenv.serialize_to_string())"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Create Inference Configuration"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"from azureml.core.model import InferenceConfig\n",
|
|
"\n",
|
|
"inference_config = InferenceConfig(runtime= \"python\", \n",
|
|
" entry_script=\"score.py\",\n",
|
|
" extra_docker_file_steps = \"Dockerfile\",\n",
|
|
" conda_file=\"myenv.yml\")"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Deploy the model"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"from azureml.core.webservice import AciWebservice\n",
|
|
"\n",
|
|
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
|
" memory_gb = 1, \n",
|
|
" tags = {'demo': 'onnx'}, \n",
|
|
" description = 'ONNX for mnist model')"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"The following cell will likely take a few minutes to run."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"from azureml.core.webservice import Webservice\n",
|
|
"\n",
|
|
"aci_service_name = 'onnx-demo-mnist'\n",
|
|
"print(\"Service\", aci_service_name)\n",
|
|
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
|
|
"aci_service.wait_for_deployment(True)\n",
|
|
"print(aci_service.state)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"if aci_service.state != 'Healthy':\n",
|
|
" # run this command for debugging.\n",
|
|
" print(aci_service.get_logs())\n",
|
|
"\n",
|
|
" # If your deployment fails, make sure to delete your aci_service or rename your service before trying again!\n",
|
|
" # aci_service.delete()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Success!\n",
|
|
"\n",
|
|
"If you've made it this far, you've deployed a working VM with a handwritten digit classifier running in the cloud using Azure ML. Congratulations!\n",
|
|
"\n",
|
|
"You can get the URL for the webservice with the code below. Let's now see how well our model deals with our test images."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"print(aci_service.scoring_uri)"
|
|
]
|
|
},
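{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can also call the scoring URI directly over HTTP instead of going through the SDK. The snippet below is a minimal sketch, assuming the `requests` package is installed locally and that the service accepts the same raw JSON payload (`{\"data\": [...]}`) that the `run()` function in our score.py expects; `sample_input` is a placeholder for a real (1, 1, 28, 28) image tensor.\n",
"\n",
"```python\n",
"import json\n",
"\n",
"import numpy as np\n",
"import requests\n",
"\n",
"# placeholder input; replace with a real preprocessed MNIST image tensor\n",
"sample_input = np.zeros((1, 1, 28, 28), dtype=np.float32)\n",
"payload = json.dumps({'data': sample_input.tolist()})\n",
"\n",
"# POST the JSON payload to the scoring endpoint and print the service's response\n",
"headers = {'Content-Type': 'application/json'}\n",
"response = requests.post(aci_service.scoring_uri, data=payload, headers=headers)\n",
"print(response.json())\n",
"```"
]
},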
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Testing and Evaluation\n",
|
|
"\n",
|
|
"### Load Test Data\n",
|
|
"\n",
|
|
"These are already in your directory from your ONNX model download (from the model zoo).\n",
|
|
"\n",
|
|
"Notice that our Model Zoo files have a .pb extension. This is because they are [protobuf files (Protocol Buffers)](https://developers.google.com/protocol-buffers/docs/pythontutorial), so we need to read in our data through our ONNX TensorProto reader into a format we can work with, like numerical arrays."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# to manipulate our arrays\n",
|
|
"import numpy as np \n",
|
|
"\n",
|
|
"# read in test data protobuf files included with the model\n",
|
|
"import onnx\n",
|
|
"from onnx import numpy_helper\n",
|
|
"\n",
|
|
"# to use parsers to read in our model/data\n",
|
|
"import json\n",
|
|
"import os\n",
|
|
"\n",
|
|
"test_inputs = []\n",
|
|
"test_outputs = []\n",
|
|
"\n",
|
|
"# read in 3 testing images from .pb files\n",
|
|
"test_data_size = 3\n",
|
|
"\n",
|
|
"for i in np.arange(test_data_size):\n",
|
|
" input_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(i), 'input_0.pb')\n",
|
|
" output_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(i), 'output_0.pb')\n",
|
|
" \n",
|
|
" # convert protobuf tensors to np arrays using the TensorProto reader from ONNX\n",
|
|
" tensor = onnx.TensorProto()\n",
|
|
" with open(input_test_data, 'rb') as f:\n",
|
|
" tensor.ParseFromString(f.read())\n",
|
|
" \n",
|
|
" input_data = numpy_helper.to_array(tensor)\n",
|
|
" test_inputs.append(input_data)\n",
|
|
" \n",
|
|
" with open(output_test_data, 'rb') as f:\n",
|
|
" tensor.ParseFromString(f.read())\n",
|
|
" \n",
|
|
" output_data = numpy_helper.to_array(tensor)\n",
|
|
" test_outputs.append(output_data)\n",
|
|
" \n",
|
|
"if len(test_inputs) == test_data_size:\n",
|
|
" print('Test data loaded successfully.')"
|
|
]
|
|
},
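{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check before calling the web service, you can score one of these test inputs locally with ONNX Runtime. This is a minimal sketch, assuming the `onnxruntime` package is also installed in your local environment (`pip install onnxruntime`); it reuses `model_dir`, `test_inputs`, and `test_outputs` from the cells above.\n",
"\n",
"```python\n",
"import os\n",
"\n",
"import numpy as np\n",
"import onnxruntime\n",
"\n",
"# create a local inference session on the downloaded ONNX model\n",
"local_session = onnxruntime.InferenceSession(os.path.join(model_dir, 'model.onnx'))\n",
"input_name = local_session.get_inputs()[0].name\n",
"\n",
"# run the first test image through the model and compare with the expected output\n",
"local_result = local_session.run(None, {input_name: test_inputs[0]})[0]\n",
"print('Local prediction:', int(np.argmax(local_result)))\n",
"print('Expected label:  ', int(np.argmax(test_outputs[0])))\n",
"```"
]
},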
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {
|
|
"nbpresent": {
|
|
"id": "c3f2f57c-7454-4d3e-b38d-b0946cf066ea"
|
|
}
|
|
},
|
|
"source": [
|
|
"### Show some sample images\n",
|
|
"We use `matplotlib` to plot 3 test images from the dataset."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {
|
|
"nbpresent": {
|
|
"id": "396d478b-34aa-4afa-9898-cdce8222a516"
|
|
}
|
|
},
|
|
"outputs": [],
|
|
"source": [
|
|
"plt.figure(figsize = (16, 6))\n",
|
|
"for test_image in np.arange(3):\n",
|
|
" plt.subplot(1, 15, test_image+1)\n",
|
|
" plt.axhline('')\n",
|
|
" plt.axvline('')\n",
|
|
" plt.imshow(test_inputs[test_image].reshape(28, 28), cmap = plt.cm.Greys)\n",
|
|
"plt.show()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Run evaluation / prediction"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"plt.figure(figsize = (16, 6), frameon=False)\n",
|
|
"plt.subplot(1, 8, 1)\n",
|
|
"\n",
|
|
"plt.text(x = 0, y = -30, s = \"True Label: \", fontsize = 13, color = 'black')\n",
|
|
"plt.text(x = 0, y = -20, s = \"Result: \", fontsize = 13, color = 'black')\n",
|
|
"plt.text(x = 0, y = -10, s = \"Inference Time: \", fontsize = 13, color = 'black')\n",
|
|
"plt.text(x = 3, y = 14, s = \"Model Input\", fontsize = 12, color = 'black')\n",
|
|
"plt.text(x = 6, y = 18, s = \"(28 x 28)\", fontsize = 12, color = 'black')\n",
|
|
"plt.imshow(np.ones((28,28)), cmap=plt.cm.Greys) \n",
|
|
"\n",
|
|
"\n",
|
|
"for i in np.arange(test_data_size):\n",
|
|
" \n",
|
|
" input_data = json.dumps({'data': test_inputs[i].tolist()})\n",
|
|
" \n",
|
|
" # predict using the deployed model\n",
|
|
" r = aci_service.run(input_data)\n",
|
|
" \n",
|
|
" if \"error\" in r:\n",
|
|
" print(r['error'])\n",
|
|
" break\n",
|
|
" \n",
|
|
" result = r['result']\n",
|
|
" time_ms = np.round(r['time_in_sec'] * 1000, 2)\n",
|
|
" \n",
|
|
" ground_truth = int(np.argmax(test_outputs[i]))\n",
|
|
" \n",
|
|
" # compare actual value vs. the predicted values:\n",
|
|
" plt.subplot(1, 8, i+2)\n",
|
|
" plt.axhline('')\n",
|
|
" plt.axvline('')\n",
|
|
"\n",
|
|
" # use different color for misclassified sample\n",
|
|
" font_color = 'red' if ground_truth != result else 'black'\n",
|
|
" clr_map = plt.cm.gray if ground_truth != result else plt.cm.Greys\n",
|
|
"\n",
|
|
" # ground truth labels are in blue\n",
|
|
" plt.text(x = 10, y = -30, s = ground_truth, fontsize = 18, color = 'blue')\n",
|
|
" \n",
|
|
" # predictions are in black if correct, red if incorrect\n",
|
|
" plt.text(x = 10, y = -20, s = result, fontsize = 18, color = font_color)\n",
|
|
" plt.text(x = 5, y = -10, s = str(time_ms) + ' ms', fontsize = 14, color = font_color)\n",
|
|
"\n",
|
|
" \n",
|
|
" plt.imshow(test_inputs[i].reshape(28, 28), cmap = clr_map)\n",
|
|
"\n",
|
|
"plt.show()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Try classifying your own images!\n",
|
|
"\n",
|
|
"Create your own handwritten image and pass it into the model."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# Preprocessing functions take your image and format it so it can be passed\n",
|
|
"# as input into our ONNX model\n",
|
|
"\n",
|
|
"import cv2\n",
|
|
"\n",
|
|
"def rgb2gray(rgb):\n",
|
|
" \"\"\"Convert the input image into grayscale\"\"\"\n",
|
|
" return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n",
|
|
"\n",
|
|
"def resize_img(img_to_resize):\n",
|
|
" \"\"\"Resize image to MNIST model input dimensions\"\"\"\n",
|
|
" r_img = cv2.resize(img_to_resize, dsize=(28, 28), interpolation=cv2.INTER_AREA)\n",
|
|
" r_img.resize((1, 1, 28, 28))\n",
|
|
" return r_img\n",
|
|
"\n",
|
|
"def preprocess(img_to_preprocess):\n",
|
|
" \"\"\"Resize input images and convert them to grayscale.\"\"\"\n",
|
|
" if img_to_preprocess.shape == (28, 28):\n",
|
|
" img_to_preprocess.resize((1, 1, 28, 28))\n",
|
|
" return img_to_preprocess\n",
|
|
" \n",
|
|
" grayscale = rgb2gray(img_to_preprocess)\n",
|
|
" processed_img = resize_img(grayscale)\n",
|
|
" return processed_img"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# Replace this string with your own path/test image\n",
|
|
"# Make sure your image is square and the dimensions are equal (i.e. 100 * 100 pixels or 28 * 28 pixels)\n",
|
|
"\n",
|
|
"# Any PNG or JPG image file should work\n",
|
|
"\n",
|
|
"your_test_image = \"<path to file>\"\n",
|
|
"\n",
|
|
"# e.g. your_test_image = \"C:/Users/vinitra.swamy/Pictures/handwritten_digit.png\"\n",
|
|
"\n",
|
|
"import matplotlib.image as mpimg\n",
|
|
"\n",
|
|
"if your_test_image != \"<path to file>\":\n",
|
|
" img = mpimg.imread(your_test_image)\n",
|
|
" plt.subplot(1,3,1)\n",
|
|
" plt.imshow(img, cmap = plt.cm.Greys)\n",
|
|
" print(\"Old Dimensions: \", img.shape)\n",
|
|
" img = preprocess(img)\n",
|
|
" print(\"New Dimensions: \", img.shape)\n",
|
|
"else:\n",
|
|
" img = None"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"if img is None:\n",
|
|
" print(\"Add the path for your image data.\")\n",
|
|
"else:\n",
|
|
" input_data = json.dumps({'data': img.tolist()})\n",
|
|
"\n",
|
|
" try:\n",
|
|
" r = aci_service.run(input_data)\n",
|
|
" result = r['result']\n",
|
|
" time_ms = np.round(r['time_in_sec'] * 1000, 2)\n",
|
|
" except KeyError as e:\n",
|
|
" print(str(e))\n",
|
|
"\n",
|
|
" plt.figure(figsize = (16, 6))\n",
|
|
" plt.subplot(1, 15,1)\n",
|
|
" plt.axhline('')\n",
|
|
" plt.axvline('')\n",
|
|
" plt.text(x = -100, y = -20, s = \"Model prediction: \", fontsize = 14)\n",
|
|
" plt.text(x = -100, y = -10, s = \"Inference time: \", fontsize = 14)\n",
|
|
" plt.text(x = 0, y = -20, s = str(result), fontsize = 14)\n",
|
|
" plt.text(x = 0, y = -10, s = str(time_ms) + \" ms\", fontsize = 14)\n",
|
|
" plt.text(x = -100, y = 14, s = \"Input image: \", fontsize = 14)\n",
|
|
" plt.imshow(img.reshape(28, 28), cmap = plt.cm.gray) "
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Optional: How does our ONNX MNIST model work? \n",
|
|
"#### A brief explanation of Convolutional Neural Networks\n",
|
|
"\n",
|
|
"A [convolutional neural network](https://en.wikipedia.org/wiki/Convolutional_neural_network) (CNN, or ConvNet) is a type of [feed-forward](https://en.wikipedia.org/wiki/Feedforward_neural_network) artificial neural network made up of neurons that have learnable weights and biases. The CNNs take advantage of the spatial nature of the data. In nature, we perceive different objects by their shapes, size and colors. For example, objects in a natural scene are typically edges, corners/vertices (defined by two of more edges), color patches etc. These primitives are often identified using different detectors (e.g., edge detection, color detector) or combination of detectors interacting to facilitate image interpretation (object classification, region of interest detection, scene description etc.) in real world vision related tasks. These detectors are also known as filters. Convolution is a mathematical operator that takes an image and a filter as input and produces a filtered output (representing say edges, corners, or colors in the input image). \n",
|
|
"\n",
|
|
"Historically, these filters are a set of weights that were often hand crafted or modeled with mathematical functions (e.g., [Gaussian](https://en.wikipedia.org/wiki/Gaussian_filter) / [Laplacian](http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm) / [Canny](https://en.wikipedia.org/wiki/Canny_edge_detector) filter). The filter outputs are mapped through non-linear activation functions mimicking human brain cells called [neurons](https://en.wikipedia.org/wiki/Neuron). Popular deep CNNs or ConvNets (such as [AlexNet](https://en.wikipedia.org/wiki/AlexNet), [VGG](https://arxiv.org/abs/1409.1556), [Inception](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf), [ResNet](https://arxiv.org/pdf/1512.03385v1.pdf)) that are used for various [computer vision](https://en.wikipedia.org/wiki/Computer_vision) tasks have many of these architectural primitives (inspired from biology). \n",
|
|
"\n",
|
|
"### Convolution Layer\n",
|
|
"\n",
|
|
"A convolution layer is a set of filters. Each filter is defined by a weight (**W**) matrix, and bias ($b$).\n",
|
|
"\n",
|
|
"\n",
|
|
"\n",
|
|
"These filters are scanned across the image performing the dot product between the weights and corresponding input value ($x$). The bias value is added to the output of the dot product and the resulting sum is optionally mapped through an activation function. This process is illustrated in the following animation."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"Image(url=\"https://www.cntk.ai/jup/cntk103d_conv2d_final.gif\", width= 200)"
|
|
]
|
|
},
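{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the convolution step concrete in code, here is a small illustrative NumPy sketch. The 5 x 5 image, the 3 x 3 filter weights **W**, and the bias $b$ are toy values chosen for illustration, not the weights of the deployed model.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# a toy 5 x 5 'image' and a 3 x 3 filter with weights W and bias b\n",
"image = np.arange(25, dtype=np.float32).reshape(5, 5)\n",
"W = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=np.float32)  # vertical-edge filter\n",
"b = 0.5\n",
"\n",
"# slide the filter over the image (stride 1, no padding), then apply a ReLU activation\n",
"out_size = image.shape[0] - W.shape[0] + 1\n",
"feature_map = np.zeros((out_size, out_size), dtype=np.float32)\n",
"for i in range(out_size):\n",
"    for j in range(out_size):\n",
"        patch = image[i:i + 3, j:j + 3]\n",
"        feature_map[i, j] = max(0.0, np.sum(W * patch) + b)  # dot product + bias, then ReLU\n",
"\n",
"print(feature_map)\n",
"```"
]
},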
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### Model Description\n",
|
|
"\n",
|
|
"The MNIST model from the ONNX Model Zoo uses maxpooling to update the weights in its convolutions, summarized by the graphic below. You can see the entire workflow of our pre-trained model in the following image, with our input images and our output probabilities of each of our 10 labels. If you're interested in exploring the logic behind creating a Deep Learning model further, please look at the [training tutorial for our ONNX MNIST Convolutional Neural Network](https://github.com/Microsoft/CNTK/blob/master/Tutorials/CNTK_103D_MNIST_ConvolutionalNeuralNetwork.ipynb). "
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"#### Max-Pooling for Convolutional Neural Nets\n",
|
|
"\n",
|
|
""
|
|
]
|
|
},
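{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative sketch (toy values, not the model's actual implementation), 2 x 2 max-pooling with stride 2 keeps only the largest value in each 2 x 2 block of a feature map, halving its height and width:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# a 4 x 4 feature map, pooled with a 2 x 2 window and stride 2\n",
"feature_map = np.array([[1, 3, 2, 4],\n",
"                        [5, 6, 1, 2],\n",
"                        [7, 2, 9, 0],\n",
"                        [1, 4, 3, 8]], dtype=np.float32)\n",
"\n",
"# group the array into 2 x 2 blocks and take the maximum of each block\n",
"pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))\n",
"print(pooled)  # [[6. 4.]\n",
"               #  [7. 9.]]\n",
"```"
]
},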
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"#### Pre-Trained Model Architecture\n",
|
|
"\n",
|
|
""
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": null,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# remember to delete your service after you are done using it!\n",
|
|
"\n",
|
|
"# aci_service.delete()"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Conclusion\n",
|
|
"\n",
|
|
"Congratulations!\n",
|
|
"\n",
|
|
"In this tutorial, you have:\n",
|
|
"- familiarized yourself with ONNX Runtime inference and the pretrained models in the ONNX model zoo\n",
|
|
"- understood a state-of-the-art convolutional neural net image classification model (MNIST in ONNX) and deployed it in Azure ML cloud\n",
|
|
"- ensured that your deep learning model is working perfectly (in the cloud) on test data, and checked it against some of your own!\n",
|
|
"\n",
|
|
"Next steps:\n",
|
|
"- Check out another interesting application based on a Microsoft Research computer vision paper that lets you set up a [facial emotion recognition model](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model in an Azure ML virtual machine.\n",
|
|
"- Contribute to our [open source ONNX repository on github](http://github.com/onnx/onnx) and/or add to our [ONNX model zoo](http://github.com/onnx/models)"
|
|
]
|
|
}
|
|
],
|
|
"metadata": {
|
|
"authors": [
|
|
{
|
|
"name": "viswamy"
|
|
}
|
|
],
|
|
"kernelspec": {
|
|
"display_name": "Python 3.6",
|
|
"language": "python",
|
|
"name": "python36"
|
|
},
|
|
"language_info": {
|
|
"codemirror_mode": {
|
|
"name": "ipython",
|
|
"version": 3
|
|
},
|
|
"file_extension": ".py",
|
|
"mimetype": "text/x-python",
|
|
"name": "python",
|
|
"nbconvert_exporter": "python",
|
|
"pygments_lexer": "ipython3",
|
|
"version": "3.6.5"
|
|
},
|
|
"msauthor": "vinitra.swamy"
|
|
},
|
|
"nbformat": 4,
|
|
"nbformat_minor": 2
|
|
} |