diff --git a/onnx/README.md b/onnx/README.md new file mode 100644 index 00000000..0b6d7890 --- /dev/null +++ b/onnx/README.md @@ -0,0 +1,23 @@ +# ONNX Runtime on Azure Machine Learning (AML) + +These tutorials show how to deploy pretrained [ONNX](http://onnx.ai) models on Azure virtual machines using [ONNX Runtime](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx) for inference. By the end of each tutorial, you will have deployed a state-of-the-art deep learning model on a virtual machine in Azure Machine Learning, using ONNX Runtime for inference, and you will be able to send your own images to the deployed model for analysis! + +## Tutorials +- [Handwritten Digit Classification (MNIST) using ONNX Runtime on AzureML](https://github.com/Azure/MachineLearningNotebooks/blob/master/onnx/onnx-inference-mnist.ipynb) +- [Facial Expression Recognition using ONNX Runtime on AzureML](https://github.com/Azure/MachineLearningNotebooks/blob/master/onnx/onnx-inference-emotion-recognition.ipynb) + +## Documentation +- [ONNX Runtime Python API Documentation](http://aka.ms/onnxruntime-python) +- [Azure Machine Learning API Documentation](http://aka.ms/aml-docs) + +## Related Articles +- [Building and Deploying ONNX Runtime Models](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx) + - [Azure AI – Making AI Real for Business](https://aka.ms/aml-blog-overview) +- [What’s new in Azure Machine Learning](https://aka.ms/aml-blog-whats-new) + + +## License + +Copyright (c) Microsoft Corporation. All rights reserved. +Licensed under the MIT License. 
+ diff --git a/onnx/onnx-inference-emotion-recognition.ipynb b/onnx/onnx-inference-emotion-recognition.ipynb index d5870fd7..68787744 100644 --- a/onnx/onnx-inference-emotion-recognition.ipynb +++ b/onnx/onnx-inference-emotion-recognition.ipynb @@ -12,7 +12,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Facial Expression Recognition using ONNX Runtime on AzureML\n", + "# Facial Expression Recognition (Emotion FER+) using ONNX Runtime on Azure ML\n", "\n", "This example shows how to deploy an image classification neural network using the Facial Expression Recognition ([FER](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data)) dataset and Open Neural Network eXchange format ([ONNX](http://aka.ms/onnxdocarticle)) on the Azure Machine Learning platform. This tutorial will show you how to deploy a FER+ model from the [ONNX model zoo](https://github.com/onnx/models), use it to make predictions using ONNX Runtime Inference, and deploy it as a web service in Azure.\n", "\n", @@ -34,32 +34,54 @@ "## Prerequisites\n", "\n", "### 1. Install Azure ML SDK and create a new workspace\n", - "Please follow [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook.\n", - "\n", + "Please follow [Azure ML configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) to set up your environment.\n", "\n", "### 2. 
Install additional packages needed for this Notebook\n", - "You need to install the popular plotting library `matplotlib`, the image manipulation library `PIL`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n", + "You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Machine Learning SDK is installed.\n", "\n", "```sh\n", - "(myenv) $ pip install matplotlib onnx Pillow\n", + "(myenv) $ pip install matplotlib onnx opencv-python\n", "```\n", "\n", + "**Debugging tip**: Make sure to activate your virtual environment (myenv) before you re-launch this notebook using the `jupyter notebook` command. Choose the respective Python kernel for your new virtual environment using the `Kernel > Change Kernel` menu above. If you have completed the steps correctly, the upper right corner of your screen should state `Python [conda env:myenv]` instead of `Python [default]`.\n", + "\n", "### 3. Download sample data and pre-trained ONNX model from ONNX Model Zoo.\n", "\n", - "[Download the ONNX Emotion FER+ model and corresponding test data](https://www.cntk.ai/OnnxModels/emotion_ferplus/opset_7/emotion_ferplus.tar.gz) and place them in the same folder as this tutorial notebook. You can unzip the file through the following line of code.\n", + "In the following lines of code, we download [the trained ONNX Emotion FER+ model and corresponding test data](https://github.com/onnx/models/tree/master/emotion_ferplus) and place them in the same folder as this tutorial notebook. For more information about the FER+ dataset, please visit Microsoft Researcher Emad Barsoum's [FER+ source data repository](https://github.com/ebarsoum/FERPlus)."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# urllib is a built-in Python library to download files from URLs\n", "\n", - "```sh\n", - "(myenv) $ tar xvzf emotion_ferplus.tar.gz\n", - "```\n", + "# Objective: retrieve the latest version of the ONNX Emotion FER+ model files from the\n", + "# ONNX Model Zoo and save them in the same folder as this tutorial\n", "\n", - "More information can be found about the ONNX FER+ model on [github](https://github.com/onnx/models/tree/master/emotion_ferplus). For more information about the FER+ dataset, please visit Microsoft Researcher Emad Barsoum's [FER+ source data repository](https://github.com/ebarsoum/FERPlus)." + "import urllib.request\n", + "\n", + "onnx_model_url = \"https://www.cntk.ai/OnnxModels/emotion_ferplus/opset_7/emotion_ferplus.tar.gz\"\n", + "\n", + "urllib.request.urlretrieve(onnx_model_url, filename=\"emotion_ferplus.tar.gz\")\n", + "\n", + "# the ! magic command tells our jupyter notebook kernel to run the following line of \n", + "# code from the command line instead of the notebook kernel\n", + "\n", + "# We use tar with the xvzf flags to unzip the files we just retrieved from the ONNX model zoo\n", + "\n", + "!tar xvzf emotion_ferplus.tar.gz" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Load Azure ML workspace\n", + "## Deploy a VM with your ONNX model in the Cloud\n", + "\n", + "### Load Azure ML workspace\n", "\n", "We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook." ] @@ -147,9 +169,9 @@ "source": [ "### ONNX FER+ Model Methodology\n", "\n", - "The image classification model we are using is pre-trained using Microsoft's deep learning cognitive toolkit, [CNTK](https://github.com/Microsoft/CNTK), from the [ONNX model zoo](http://github.com/onnx/models). 
The model zoo has many other models that can be deployed on cloud providers like AzureML without any additional training. To ensure that our cloud deployed model works, we use testing data from the famous FER+ data set, provided as part of the [trained Emotion Recognition model](https://github.com/onnx/models/tree/master/emotion_ferplus) in the ONNX model zoo.\n", + "The image classification model we are using is pre-trained using Microsoft's deep learning cognitive toolkit, [CNTK](https://github.com/Microsoft/CNTK), from the [ONNX model zoo](http://github.com/onnx/models). The model zoo has many other models that can be deployed on cloud providers like AzureML without any additional training. To ensure that our cloud deployed model works, we use testing data from the well-known FER+ data set, provided as part of the [trained Emotion Recognition model](https://github.com/onnx/models/tree/master/emotion_ferplus) in the ONNX model zoo.\n", "\n", - "The original Facial Emotion Recognition (FER) Dataset was released in 2013, but some of the labels are not entirely appropriate for the expression. In the FER+ Dataset, each photo was evaluated by at least 10 croud sourced reviewers, creating a better basis for ground truth. \n", + "The original Facial Emotion Recognition (FER) Dataset was released in 2013 by Pierre-Luc Carrier and Aaron Courville as part of a [Kaggle Competition](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data), but some of the labels are not entirely appropriate for the expression. In the FER+ Dataset, each photo was evaluated by at least 10 crowd-sourced reviewers, creating a more accurate basis for ground truth. \n", "\n", "You can see the difference of label quality in the sample model input below. 
The FER labels are the first word below each image, and the FER+ labels are the second word below each image.\n", "\n", @@ -202,20 +224,18 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Deploy our model on Azure ML" + "### Specify our Score and Environment Files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file.\n", - "\n", - "You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating out model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n", + "We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file. You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating our model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n", "\n", "### Write Score File\n", "\n", - "A score file is what tells our Azure cloud service what to do. After initializing our model using azureml.core.model, we start an ONNX Runtime GPU inference session to evaluate the data passed in on our function calls." + "A score file is what tells our Azure cloud service what to do. After initializing our model using azureml.core.model, we start an ONNX Runtime inference session to evaluate the data passed in on our function calls." 
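The run() contract that the score file describes (a JSON string in, a JSON string out) can be sketched offline with just the standard library and numpy; this is a minimal illustration, not the notebooks' code — the 1x1x64x64 shape matches the FER+ model input, and the all-zeros image is a placeholder:

```python
import json
import numpy as np

# Client side: serialize an image batch under the "data" key,
# exactly the payload shape the score.py run() function expects.
img = np.zeros((1, 1, 64, 64), dtype=np.float32)  # FER+ input shape
input_data = json.dumps({"data": img.tolist()})

# Server side: mirrors the first line of run() in score.py,
# recovering a float32 array ready to pass to the inference session.
data = np.array(json.loads(input_data)["data"]).astype("float32")
print(data.shape)
```

Keeping the payload as nested JSON lists (rather than raw bytes) is what lets the same score file serve any HTTP client.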
] }, { @@ -248,10 +268,13 @@ " try:\n", " # load in our data, convert to readable format\n", " data = np.array(json.loads(input_data)['data']).astype('float32')\n", + " \n", " start = time.time()\n", " r = session.run([output_name], {input_name : data})\n", " end = time.time()\n", + " \n", " result = emotion_map(postprocess(r[0]))\n", + " \n", " result_dict = {\"result\": result,\n", " \"time_in_sec\": [end - start]}\n", " except Exception as e:\n", @@ -260,9 +283,12 @@ " return json.dumps(result_dict)\n", "\n", "def emotion_map(classes, N=1):\n", - " \"\"\"Take the most probable labels (output of postprocess) and returns the top N emotional labels that fit the picture.\"\"\"\n", + " \"\"\"Take the most probable labels (output of postprocess) and returns the \n", + " top N emotional labels that fit the picture.\"\"\"\n", + " \n", + " emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, \n", + " 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}\n", " \n", - " emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}\n", " emotion_keys = list(emotion_table.keys())\n", " emotions = []\n", " for i in range(N):\n", @@ -276,8 +302,8 @@ " return e_x / e_x.sum(axis=0)\n", "\n", "def postprocess(scores):\n", - " \"\"\"This function takes the scores generated by the network and returns the class IDs in decreasing \n", - " order of probability.\"\"\"\n", + " \"\"\"This function takes the scores generated by the network and \n", + " returns the class IDs in decreasing order of probability.\"\"\"\n", " prob = softmax(scores)\n", " prob = np.squeeze(prob)\n", " classes = np.argsort(prob)[::-1]\n", @@ -329,7 +355,7 @@ "image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n", " runtime = \"python\",\n", " conda_file = \"myenv.yml\",\n", - " description = \"test\",\n", + " description = \"Emotion ONNX Runtime container\",\n", " tags = {\"demo\": \"onnx\"})\n", "\n", "\n", @@ 
-346,8 +372,6 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Debugging\n", - "\n", "In case you need to debug your code, the next line of code accesses the log file." ] }, @@ -364,9 +388,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We're all set! Let's get our model chugging.\n", + "We're all done specifying what we want our virtual machine to do. Let's configure and deploy our container image.\n", "\n", - "## Deploy the container image" + "### Deploy the container image" ] }, { @@ -439,23 +463,57 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Testing and Evaluation" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Useful Helper Functions\n", + "## Testing and Evaluation\n", + "\n", + "### Useful Helper Functions\n", "\n", "We preprocess and postprocess our data (see score.py file) using the helper functions specified in the [ONNX FER+ Model page in the Model Zoo repository](https://github.com/onnx/models/tree/master/emotion_ferplus)." 
] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def emotion_map(classes, N=1):\n", + " \"\"\"Take the most probable labels (output of postprocess) and returns the \n", + " top N emotional labels that fit the picture.\"\"\"\n", + " \n", + " emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, \n", + " 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}\n", + " \n", + " emotion_keys = list(emotion_table.keys())\n", + " emotions = []\n", + " for i in range(N):\n", + " emotions.append(emotion_keys[classes[i]])\n", + " \n", + " return emotions\n", + "\n", + "def softmax(x):\n", + " \"\"\"Compute softmax values (probabilities from 0 to 1) for each possible label.\"\"\"\n", + " x = x.reshape(-1)\n", + " e_x = np.exp(x - np.max(x))\n", + " return e_x / e_x.sum(axis=0)\n", + "\n", + "def postprocess(scores):\n", + " \"\"\"This function takes the scores generated by the network and \n", + " returns the class IDs in decreasing order of probability.\"\"\"\n", + " prob = softmax(scores)\n", + " prob = np.squeeze(prob)\n", + " classes = np.argsort(prob)[::-1]\n", + " return classes" + ] + }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### Load Test Data" + "### Load Test Data\n", + "\n", + "These are already in your directory from your ONNX model download (from the model zoo).\n", + "\n", + "Notice that our Model Zoo files have a .pb extension. This is because they are [protobuf files (Protocol Buffers)](https://developers.google.com/protocol-buffers/docs/pythontutorial), so we need to read in our data through our ONNX TensorProto reader into a format we can work with, like numerical arrays." 
] }, { @@ -475,8 +533,6 @@ "import json\n", "import os\n", "\n", - "from score import emotion_map, softmax, postprocess\n", - "\n", "test_inputs = []\n", "test_outputs = []\n", "\n", @@ -512,7 +568,7 @@ }, "source": [ "### Show some sample images\n", - "We use `matplotlib` to plot 3 test images from the model zoo with their labels over them." + "We use `matplotlib` to plot 3 test images from the dataset." ] }, { @@ -532,7 +588,7 @@ " plt.axhline('')\n", " plt.axvline('')\n", " plt.text(x = 10, y = -10, s = test_outputs[test_image], fontsize = 18)\n", - " plt.imshow(test_inputs[test_image].reshape(64, 64), cmap = plt.cm.Greys)\n", + " plt.imshow(test_inputs[test_image].reshape(64, 64), cmap = plt.cm.gray)\n", "plt.show()" ] }, @@ -571,7 +627,7 @@ " print(r['error'])\n", " break\n", " \n", - " result = r['result'][0][0]\n", + " result = r['result'][0]\n", " time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n", " \n", " ground_truth = test_outputs[i]\n", @@ -583,7 +639,7 @@ "\n", " # use different color for misclassified sample\n", " font_color = 'red' if ground_truth != result else 'black'\n", - " clr_map = plt.cm.gray if ground_truth != result else plt.cm.Greys\n", + " clr_map = plt.cm.Greys if ground_truth != result else plt.cm.gray\n", "\n", " # ground truth labels are in blue\n", " plt.text(x = 10, y = -70, s = ground_truth, fontsize = 18, color = 'blue')\n", @@ -611,15 +667,30 @@ "metadata": {}, "outputs": [], "source": [ - "from PIL import Image\n", + "# Preprocessing functions take your image and format it so it can be passed\n", + "# as input into our ONNX model\n", "\n", - "def preprocess(image_path):\n", - " input_shape = (1, 1, 64, 64)\n", - " img = Image.open(image_path)\n", - " img = img.resize((64, 64), Image.ANTIALIAS)\n", - " img_data = np.array(img)\n", - " img_data = np.resize(img_data, input_shape)\n", - " return img_data" + "import cv2\n", + "\n", + "def rgb2gray(rgb):\n", + " \"\"\"Convert the input image into grayscale\"\"\"\n", + " return 
np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n", + "\n", + "def resize_img(img):\n", + " \"\"\"Resize image to MNIST model input dimensions\"\"\"\n", + " img = cv2.resize(img, dsize=(64, 64), interpolation=cv2.INTER_AREA)\n", + " img.resize((1, 1, 64, 64))\n", + " return img\n", + "\n", + "def preprocess(img):\n", + " \"\"\"Resize input images and convert them to grayscale.\"\"\"\n", + " if img.shape == (64, 64):\n", + " img.resize((1, 1, 64, 64))\n", + " return img\n", + " \n", + " grayscale = rgb2gray(img)\n", + " processed_img = resize_img(grayscale)\n", + " return processed_img" ] }, { @@ -636,12 +707,15 @@ "\n", "# e.g. your_test_image = \"C://Users//vinitra.swamy//Pictures//emotion_test_images//img_1.png\"\n", "\n", - "your_test_image = \"\"\n", + "import matplotlib.image as mpimg\n", "\n", "if your_test_image != \"\":\n", - " img = preprocess(your_test_image)\n", + " img = mpimg.imread(your_test_image)\n", " plt.subplot(1,3,1)\n", - " plt.imshow(img.reshape((64,64)), cmap = plt.cm.gray)\n", + " plt.imshow(img, cmap = plt.cm.Greys)\n", + " print(\"Old Dimensions: \", img.shape)\n", + " img = preprocess(img)\n", + " print(\"New Dimensions: \", img.shape)\n", "else:\n", " img = None" ] @@ -659,7 +733,7 @@ "\n", " try:\n", " r = json.loads(aci_service.run(input_data))\n", - " result = r['result'][0][0]\n", + " result = r['result'][0]\n", " time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n", " except Exception as e:\n", " print(str(e))\n", @@ -668,12 +742,13 @@ " plt.subplot(1,8,1)\n", " plt.axhline('')\n", " plt.axvline('')\n", - " plt.text(x = -10, y = -35, s = \"Model prediction: \", fontsize = 14)\n", - " plt.text(x = -10, y = -20, s = \"Inference time: \", fontsize = 14)\n", - " plt.text(x = 100, y = -35, s = str(result), fontsize = 14)\n", - " plt.text(x = 100, y = -20, s = str(time_ms) + \" ms\", fontsize = 14)\n", - " plt.text(x = -10, y = -8, s = \"Input image: \", fontsize = 14)\n", - " plt.imshow(img.reshape(64, 64), cmap = plt.cm.gray) " + " 
plt.text(x = -10, y = -40, s = \"Model prediction: \", fontsize = 14)\n", + " plt.text(x = -10, y = -25, s = \"Inference time: \", fontsize = 14)\n", + " plt.text(x = 100, y = -40, s = str(result), fontsize = 14)\n", + " plt.text(x = 100, y = -25, s = str(time_ms) + \" ms\", fontsize = 14)\n", + " plt.text(x = -10, y = -10, s = \"Model Input image: \", fontsize = 14)\n", + " plt.imshow(img.reshape((64, 64)), cmap = plt.cm.gray) \n", + " " ] }, { @@ -684,7 +759,7 @@ "source": [ "# remember to delete your service after you are done using it!\n", "\n", - "# aci_service.delete()" + "aci_service.delete()" ] }, { @@ -709,9 +784,9 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python [conda env:myenv]", "language": "python", - "name": "python3" + "name": "conda-env-myenv-py" }, "language_info": { "codemirror_mode": { @@ -723,7 +798,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.6.5" + "version": "3.6.6" }, "msauthor": "vinitra.swamy" }, diff --git a/onnx/onnx-inference-mnist.ipynb b/onnx/onnx-inference-mnist.ipynb index c49d5700..42a8c2dc 100644 --- a/onnx/onnx-inference-mnist.ipynb +++ b/onnx/onnx-inference-mnist.ipynb @@ -12,7 +12,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Handwritten Digit Classification (MNIST) using ONNX Runtime on AzureML\n", + "# Handwritten Digit Classification (MNIST) using ONNX Runtime on Azure ML\n", "\n", "This example shows how to deploy an image classification neural network using the Modified National Institute of Standards and Technology ([MNIST](http://yann.lecun.com/exdb/mnist/)) dataset and Open Neural Network eXchange format ([ONNX](http://aka.ms/onnxdocarticle)) on the Azure Machine Learning platform. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing number from 0 to 9. 
This tutorial will show you how to deploy a MNIST model from the [ONNX model zoo](https://github.com/onnx/models), use it to make predictions using ONNX Runtime Inference, and deploy it as a web service in Azure.\n", "\n", @@ -22,9 +22,9 @@ "\n", "#### Tutorial Objectives:\n", "\n", - "1. Describe the MNIST dataset and pretrained Convolutional Neural Net ONNX model, stored in the ONNX model zoo.\n", - "2. Deploy and run the pretrained MNIST ONNX model on an Azure Machine Learning instance\n", - "3. Predict labels for test set data points in the cloud using ONNX Runtime and Azure ML" + "- Describe the MNIST dataset and pretrained Convolutional Neural Net ONNX model, stored in the ONNX model zoo.\n", + "- Deploy and run the pretrained MNIST ONNX model on an Azure Machine Learning instance\n", + "- Predict labels for test set data points in the cloud using ONNX Runtime and Azure ML" ] }, { @@ -34,31 +34,61 @@ "## Prerequisites\n", "\n", "### 1. Install Azure ML SDK and create a new workspace\n", - "Please follow [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook.\n", + "Please follow [Azure ML configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) to set up your environment.\n", "\n", - "### 2. Install additional packages needed for this Notebook\n", - "You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n", + "### 2. Install additional packages needed for this tutorial notebook\n", + "You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Machine Learning SDK is installed. 
\n", "\n", "```sh\n", "(myenv) $ pip install matplotlib onnx opencv-python\n", "```\n", "\n", + "**Debugging tip**: Make sure that you run the \"jupyter notebook\" command to launch this notebook after activating your virtual environment. Choose the respective Python kernel for your new virtual environment using the `Kernel > Change Kernel` menu above. If you have completed the steps correctly, the upper right corner of your screen should state `Python [conda env:myenv]` instead of `Python [default]`.\n", + "\n", "### 3. Download sample data and pre-trained ONNX model from ONNX Model Zoo.\n", "\n", - "[Download the ONNX MNIST model and corresponding test data](https://www.cntk.ai/OnnxModels/mnist/opset_7/mnist.tar.gz) and place them in the same folder as this tutorial notebook. You can unzip the file through the following line of code.\n", + "In the following lines of code, we download [the trained ONNX MNIST model and corresponding test data](https://github.com/onnx/models/tree/master/mnist) and place them in the same folder as this tutorial notebook. For more information about the MNIST dataset, please visit [Yann LeCun's website](http://yann.lecun.com/exdb/mnist/)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# urllib is a built-in Python library to download files from URLs\n", "\n", - "```sh\n", - "(myenv) $ tar xvzf mnist.tar.gz\n", - "```\n", + "# Objective: retrieve the latest version of the ONNX MNIST model files from the\n", + "# ONNX Model Zoo and save them in the same folder as this tutorial\n", "\n", - "More information can be found about the ONNX MNIST model on [github](https://github.com/onnx/models/tree/master/mnist). For more information about the MNIST dataset, please visit [Yan LeCun's website](http://yann.lecun.com/exdb/mnist/)." 
+ "import urllib.request\n", + "\n", + "onnx_model_url = \"https://www.cntk.ai/OnnxModels/mnist/opset_7/mnist.tar.gz\"\n", + "\n", + "urllib.request.urlretrieve(onnx_model_url, filename=\"mnist.tar.gz\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# the ! magic command tells our jupyter notebook kernel to run the following line of \n", + "# code from the command line instead of the notebook kernel\n", + "\n", + "# We use tar with the xvzf flags to unzip the files we just retrieved from the ONNX model zoo\n", + "\n", + "!tar xvzf mnist.tar.gz" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Load Azure ML workspace\n", + "## Deploy a VM with your ONNX model in the Cloud\n", + "\n", + "### Load Azure ML workspace\n", "\n", "We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook." ] @@ -113,11 +143,11 @@ "source": [ "from azureml.core.model import Model\n", "\n", - "model = Model.register(model_path = model_dir + \"//model.onnx\",\n", + "model = Model.register(workspace = ws,\n", + " model_path = model_dir + \"/\" + \"model.onnx\",\n", " model_name = \"mnist_1\",\n", " tags = {\"onnx\": \"demo\"},\n", - " description = \"MNIST image classification CNN from ONNX Model Zoo\",\n", - " workspace = ws)" + " description = \"MNIST image classification CNN from ONNX Model Zoo\")" ] }, { @@ -188,16 +218,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Deploy our model on Azure ML" + "### Specify our Score and Environment Files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. 
We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file.\n", - "\n", - "You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating out model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n", + "We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file. You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating our model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n", "\n", "### Write Score File\n", "\n", @@ -248,7 +276,7 @@ " return json.dumps(result_dict)\n", "\n", "def choose_class(result_prob):\n", - " \"\"\"We use argmax to determine the right label to choose from our output, after calling softmax on the 10 numbers we receive\"\"\"\n", + " \"\"\"We use argmax to determine the right label to choose from our output\"\"\"\n", " return int(np.argmax(result_prob, axis=0))" ] }, @@ -256,14 +284,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Write Environment File" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "This step creates a YAML file that specifies which dependencies we would like to see in our Linux Virtual Machine." + "### Write Environment File\n", + "\n", + "This step creates a YAML environment file that specifies which dependencies we would like to see in our Linux Virtual Machine." 
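For reference, the generated myenv.yml ends up looking roughly like the sketch below; the exact package names and pins here are illustrative assumptions, not the notebook's literal output:

```yaml
# Illustrative myenv.yml sketch (package list and pins are assumptions)
name: project_environment
dependencies:
  - python=3.6.2
  - pip:
    - azureml-defaults
    - numpy
    - onnxruntime
```

Whatever ends up in this file is what the container build step installs into the scoring image, so a package missing here shows up later as an import error in the service log.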
] }, { @@ -289,10 +312,19 @@ "metadata": {}, "source": [ "### Create the Container Image\n", - "\n", "This step will likely take a few minutes." ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.image import ContainerImage\n", + "help(ContainerImage.image_configuration)" + ] + }, { "cell_type": "code", "execution_count": null, @@ -304,8 +336,8 @@ "image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n", " runtime = \"python\",\n", " conda_file = \"myenv.yml\",\n", - " description = \"test\",\n", - " tags = {\"demo\": \"onnx\"}) )\n", + " description = \"MNIST ONNX Runtime container\",\n", + " tags = {\"demo\": \"onnx\"}) \n", "\n", "\n", "image = ContainerImage.create(name = \"onnxtest\",\n", @@ -321,8 +353,6 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Debugging\n", - "\n", "In case you need to debug your code, the next line of code accesses the log file." ] }, @@ -339,9 +369,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We're all set! Let's get our model chugging.\n", + "We're all done specifying what we want our virtual machine to do. Let's configure and deploy our container image.\n", "\n", - "## Deploy the container image" + "### Deploy the container image" ] }, { @@ -373,7 +403,7 @@ "source": [ "from azureml.core.webservice import Webservice\n", "\n", - "aci_service_name = 'onnx-demo-mnist'\n", + "aci_service_name = 'onnx-demo-mnist20'\n", "print(\"Service\", aci_service_name)\n", "\n", "aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n", @@ -414,16 +444,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Testing and Evaluation" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Load Test Data\n", + "## Testing and Evaluation\n", "\n", - "These are already in your directory from your ONNX model download (from the model zoo). 
If you didn't place your model and test data in the same directory as this notebook, edit the \"model_dir\" filename below." + "### Load Test Data\n", + "\n", + "These are already in your directory from your ONNX model download (from the model zoo).\n", + "\n", + "Notice that our Model Zoo files have a .pb extension. This is because they are [protobuf files (Protocol Buffers)](https://developers.google.com/protocol-buffers/docs/pythontutorial), so we need to read in our data through our ONNX TensorProto reader into a format we can work with, like numerical arrays." ] }, { @@ -579,7 +606,9 @@ "metadata": {}, "outputs": [], "source": [ - "# Preprocessing functions\n", + "# Preprocessing functions take your image and format it so it can be passed\n", + "# as input into our ONNX model\n", + "\n", "import cv2\n", "\n", "def rgb2gray(rgb):\n", @@ -587,12 +616,17 @@ " return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n", "\n", "def resize_img(img):\n", + " \"\"\"Resize image to MNIST model input dimensions\"\"\"\n", " img = cv2.resize(img, dsize=(28, 28), interpolation=cv2.INTER_AREA)\n", " img.resize((1, 1, 28, 28))\n", " return img\n", "\n", "def preprocess(img):\n", " \"\"\"Resize input images and convert them to grayscale.\"\"\"\n", + " if img.shape == (28, 28):\n", + " img.resize((1, 1, 28, 28))\n", + " return img\n", + " \n", " grayscale = rgb2gray(img)\n", " processed_img = resize_img(grayscale)\n", " return processed_img" @@ -608,11 +642,8 @@ "# Make sure your image is square and the dimensions are equal (i.e. 100 * 100 pixels or 28 * 28 pixels)\n", "\n", "# Any PNG or JPG image file should work\n", - "# Make sure to include the entire path with // instead of /\n", "\n", - "# e.g. your_test_image = \"C://Users//vinitra.swamy//Pictures//digit.png\"\n", - "\n", - "your_test_image = \"\"\n", + "# e.g. 
your_test_image = \"C:/Users/vinitra.swamy/Pictures/handwritten_digit.png\"\n", "\n", "import matplotlib.image as mpimg\n", "\n", @@ -721,7 +752,7 @@ "source": [ "# remember to delete your service after you are done using it!\n", "\n", - "# aci_service.delete()" + "aci_service.delete()" ] }, { @@ -738,16 +769,16 @@ "- ensured that your deep learning model is working perfectly (in the cloud) on test data, and checked it against some of your own!\n", "\n", "Next steps:\n", - "- Check out another interesting application based on a Microsoft Research computer vision paper that lets you set up a [facial emotion recognition model](https://github.com/Azure/MachineLearningNotebooks/tree/master/onnx/onnx-inference-emotion-recognition.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model in an Azure ML virtual machine with GPU support.\n", + "- Check out another interesting application based on a Microsoft Research computer vision paper that lets you set up a [facial emotion recognition model](https://github.com/Azure/MachineLearningNotebooks/tree/master/onnx/onnx-inference-emotion-recognition.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model in an Azure ML virtual machine.\n", "- Contribute to our [open source ONNX repository on github](http://github.com/onnx/onnx) and/or add to our [ONNX model zoo](http://github.com/onnx/models)" ] } ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "Python [conda env:myenv]", "language": "python", - "name": "python3" + "name": "conda-env-myenv-py" }, "language_info": { "codemirror_mode": { @@ -759,7 +790,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.6.5" + "version": "3.6.6" }, "msauthor": "vinitra.swamy" },
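The softmax/postprocess helpers that both notebooks' score files share can be sanity-checked offline; the helper bodies below match the score.py cells in this diff, while the eight sample scores are invented for illustration (one per FER+ emotion class):

```python
import numpy as np

# Helpers as defined in the notebooks' score.py
def softmax(x):
    """Compute softmax values (probabilities from 0 to 1) for each possible label."""
    x = x.reshape(-1)
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)

def postprocess(scores):
    """Return class IDs in decreasing order of probability."""
    prob = np.squeeze(softmax(scores))
    return np.argsort(prob)[::-1]

# Made-up network output for the 8 FER+ emotion classes
scores = np.array([[0.1, 2.5, -1.0, 0.3, 0.0, -0.5, 0.2, -2.0]], dtype=np.float32)
classes = postprocess(scores)
print(classes[0])  # prints 1, i.e. 'happiness' in the notebooks' emotion_table
```

Because softmax is monotonic, the ranking only depends on the raw scores; the softmax step matters when you also want to report a probability alongside the label.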