{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure ML Hardware Accelerated Models Quickstart"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This tutorial will show you how to deploy an image recognition service based on the ResNet 50 classifier in just a few minutes using the Azure Machine Learning Accelerated AI service. Get more help from our [documentation](https://aka.ms/aml-real-time-ai) or [forum](https://aka.ms/aml-forum).\n",
"\n",
"We will use an accelerated ResNet50 featurizer running on an FPGA. This functionality is powered by Project Brainwave, which handles translating deep neural networks (DNN) into an FPGA program.\n",
"\n",
"## Request Quota\n",
"**IMPORTANT:** You must [request quota](https://aka.ms/aml-real-time-ai-request) and be approved before you can successfully run this notebook. Notebook 00 will show you how to create a workspace which you can use to request quota."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import tensorflow as tf"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Image preprocessing\n",
"We'd like our service to accept JPEG images as input. However the input to ResNet50 is a tensor. So we need code that decodes JPEG images and does the preprocessing required by ResNet50. The Accelerated AI service can execute TensorFlow graphs as part of the service and we'll use that ability to do the image preprocessing. This code defines a TensorFlow graph that preprocesses an array of JPEG images (as strings) and produces a tensor that is ready to be featurized by ResNet50."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Input images as a two-dimensional tensor containing an arbitrary number of images represented a strings\n",
"import azureml.contrib.brainwave.models.utils as utils\n",
"in_images = tf.placeholder(tf.string)\n",
"image_tensors = utils.preprocess_array(in_images)\n",
"print(image_tensors.shape)"
]
},
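{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can run the preprocessing graph locally on a single JPEG and confirm the shape of the resulting tensor. This is a minimal sketch: it assumes the `snowleopardgaze.jpg` image used later in this notebook is in the current directory, and that the standard ResNet50 input size of 224 x 224 x 3 applies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: feed one JPEG (as bytes) through the preprocessing graph.\n",
"# Assumes 'snowleopardgaze.jpg' (used later in this notebook) is in the current directory.\n",
"with open('snowleopardgaze.jpg', 'rb') as f:\n",
" jpeg_bytes = f.read()\n",
"with tf.Session() as sess:\n",
" preprocessed = sess.run(image_tensors, feed_dict={in_images: [jpeg_bytes]})\n",
"print(preprocessed.shape)"
]
},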
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Featurizer\n",
"We use ResNet50 as a featurizer. In this step we initialize the model. This downloads a TensorFlow checkpoint of the quantized ResNet50."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.brainwave.models import QuantizedResnet50\n",
"model_path = os.path.expanduser('~/models')\n",
"model = QuantizedResnet50(model_path, is_frozen = True)\n",
"feature_tensor = model.import_graph_def(image_tensors)\n",
"print(model.version)\n",
"print(feature_tensor.name)\n",
"print(feature_tensor.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Classifier\n",
"The model we downloaded includes a classifier which takes the output of the ResNet50 and identifies an image. This classifier is trained on the ImageNet dataset. We are going to use this classifier for our service. The next [notebook](project-brainwave-trainsfer-learning.ipynb) shows how to train a classifier for a different data set. The input to the classifier is a tensor matching the output of our ResNet50 featurizer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"classifier_output = model.get_default_classifier(feature_tensor)"
]
},
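{
"cell_type": "markdown",
"metadata": {},
"source": [
"The classifier maps each feature vector from the featurizer to per-class scores. As a quick check of the wiring, we can inspect the output tensor; for the default ImageNet classifier it should be a batch of class scores, one per ImageNet class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect the classifier output: a batch of per-class scores for the ImageNet classes.\n",
"print(classifier_output.name)\n",
"print(classifier_output.shape)"
]
},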
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Service Definition\n",
"Now that we've definied the image preprocessing, featurizer, and classifier that we will execute on our service we can create a service definition. The service definition is a set of files generated from the model that allow us to deploy to the FPGA service. The service definition consists of a pipeline. The pipeline is a series of stages that are executed in order. We support TensorFlow stages, Keras stages, and BrainWave stages. The stages will be executed in order on the service, with the output of each stage input into the subsequent stage.\n",
"\n",
"To create a TensorFlow stage we specify a session containing the graph (in this case we are using the default graph) and the input and output tensors to this stage. We use this information to save the graph so that we can execute it on the service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.brainwave.pipeline import ModelDefinition, TensorflowStage, BrainWaveStage\n",
"\n",
"save_path = os.path.expanduser('~/models/save')\n",
"model_def_path = os.path.join(save_path, 'model_def.zip')\n",
"\n",
"model_def = ModelDefinition()\n",
"with tf.Session() as sess:\n",
" model_def.pipeline.append(TensorflowStage(sess, in_images, image_tensors))\n",
" model_def.pipeline.append(BrainWaveStage(sess, model))\n",
" model_def.pipeline.append(TensorflowStage(sess, feature_tensor, classifier_output))\n",
" model_def.save(model_def_path)\n",
" print(model_def_path)"
]
},
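{
"cell_type": "markdown",
"metadata": {},
"source": [
"The saved service definition is a zip archive containing the serialized stages. If you're curious, you can peek inside with Python's standard `zipfile` module (the exact entries may vary between SDK versions)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import zipfile\n",
"# Peek inside the service definition archive; exact entries may vary by SDK version.\n",
"with zipfile.ZipFile(model_def_path) as zf:\n",
" print(zf.namelist())"
]
},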
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy\n",
"Time to create a service from the service definition. You need a Workspace in the **East US 2** location. In the previous notebooks, you've created this Workspace. The code below will load that Workspace from a configuration file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Upload the model to the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"model_name = \"resnet-50-rtai\"\n",
"registered_model = Model.register(ws, model_def_path, model_name)"
]
},
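{
"cell_type": "markdown",
"metadata": {},
"source": [
"Registration assigns the model an id and a version; re-registering under the same name creates a new version. We can print these standard `Model` attributes to confirm the upload."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Confirm the registration: name, id, and version of the registered model.\n",
"print(registered_model.name, registered_model.id, registered_model.version, sep='\\n')"
]
},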
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a service from the model that we registered. If this is a new service then we create it. If you already have a service with this name then the existing service will be updated to use this model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"from azureml.exceptions import WebserviceException\n",
"from azureml.contrib.brainwave import BrainwaveWebservice, BrainwaveImage\n",
"service_name = \"imagenet-infer\"\n",
"service = None\n",
"try:\n",
" service = Webservice(ws, service_name)\n",
"except WebserviceException:\n",
" image_config = BrainwaveImage.image_configuration()\n",
" deployment_config = BrainwaveWebservice.deploy_configuration()\n",
" service = Webservice.deploy_from_model(ws, service_name, [registered_model], image_config, deployment_config)\n",
" service.wait_for_deployment(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Client\n",
"The service supports gRPC and the TensorFlow Serving \"predict\" API. We provide a client that can call the service to get predictions on aka.ms/rtai. You can also invoke the service like any other web service."
]
},
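{
"cell_type": "markdown",
"metadata": {},
"source": [
"For clients outside this notebook, the sketch below shows one way to call the service. It is a minimal sketch, not a verified API reference: the `PredictionClient` helper and its `score_image` method are assumptions here, so consult the client documentation at aka.ms/rtai for the supported API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch, not a verified API reference: assumes a PredictionClient\n",
"# helper in azureml.contrib.brainwave; see aka.ms/rtai for the supported client.\n",
"from azureml.contrib.brainwave.client import PredictionClient\n",
"\n",
"client = PredictionClient(service.ipAddress, service.port)\n",
"results = client.score_image('snowleopardgaze.jpg')"
]
},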
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To understand the results we need a mapping to the human readable imagenet classes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"classes_entries = requests.get(\"https://raw.githubusercontent.com/Lasagne/Recipes/master/examples/resnet50/imagenet_classes.txt\").text.splitlines()"
]
},
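{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick check that the class list lines up with the classifier output: there should be one entry per ImageNet class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One entry per ImageNet class; this should match the classifier's output width.\n",
"print(len(classes_entries))\n",
"print(classes_entries[:3])"
]
},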
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now send an image to the service and get the predictions. Let's see if it can identify a snow leopard.\n",
"![title](snowleopardgaze.jpg)\n",
"Snow leopard in a zoo. Photo by Peter Bolliger.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"results = service.run('snowleopardgaze.jpg')\n",
"# map results [class_id] => [confidence]\n",
"results = enumerate(results)\n",
"# sort results by confidence\n",
"sorted_results = sorted(results, key=lambda x: x[1], reverse=True)\n",
"# print top 5 results\n",
"for top in sorted_results[:5]:\n",
" print(classes_entries[top[0]], 'confidence:', top[1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cleanup\n",
"Run the cell below to delete your service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Congratulations! You've just created a service that does predictions using an FPGA. The next [notebook](project-brainwave-trainsfer-learning.ipynb) shows how to customize the service using transfer learning to classify different types of images."
]
}
],
"metadata": {
"authors": [
{
"name": "coverste"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}