diff --git a/how-to-use-azureml/deployment/accelerated-models/README.md b/how-to-use-azureml/deployment/accelerated-models/README.md index fca1fba5..23ad499b 100644 --- a/how-to-use-azureml/deployment/accelerated-models/README.md +++ b/how-to-use-azureml/deployment/accelerated-models/README.md @@ -9,7 +9,7 @@ Easily create and train a model using various deep neural networks (DNNs) as a f * VGG-16 * SSD-VGG -To learn more about the azureml-accel-model classes, see the section [Model Classes](#model-classes) below or the [Azure ML Python SDK documentation](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py). +To learn more about the azureml-accel-model classes, see the section [Model Classes](#model-classes) below or the [Azure ML Accel Models SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-accel-models/azureml.accel?view=azure-ml-py). ### Step 1: Create an Azure ML workspace Follow [these instructions](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python) to install the Azure ML SDK on your local machine, create an Azure ML workspace, and set up your notebook environment, which is required for the next step. @@ -19,21 +19,21 @@ Once you have set up your environment, install the Azure ML Accel Models SDK. 
If you already have tensorflow >= 1.6,<2.0 installed in your development environment, you can install the SDK package using: -`` +``` pip install azureml-accel-models -`` +``` If you do not have tensorflow >= 1.6,<2.0 and are using a CPU-only development environment, our SDK with tensorflow can be installed using: -`` +``` pip install azureml-accel-models[cpu] -`` +``` If your machine supports GPU (for example, on an [Azure DSVM](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview)), then you can leverage the tensorflow-gpu functionality using: -`` +``` pip install azureml-accel-models[gpu] -`` +``` ### Step 3: Follow our notebooks @@ -49,13 +49,28 @@ As stated above, we support 5 Accelerated Models. Here's more information on the **Available models and output tensors** The available models and the corresponding default classifier output tensors are below. This is the value that you would use during inferencing if you used the default classifier.
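The tensorflow constraint above (>= 1.6, < 2.0) can be checked before picking which extra to install; this is a minimal sketch (the helper name is ours, not part of the SDK):

```python
def satisfies_tf_constraint(version: str) -> bool:
    """Return True if a tensorflow version string falls in [1.6, 2.0)."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (1, 6) <= (major, minor) < (2, 0)

print(satisfies_tf_constraint("1.14.0"))  # True: within the supported range
print(satisfies_tf_constraint("2.1.0"))   # False: TF 2.x is not supported
```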
-* Resnet50, QuantizedResnet50 (output_tensors = "classifier_1/resnet_v1_50/predictions/Softmax:0") -* Resnet152, QuantizedResnet152 (output_tensors = "classifier/resnet_v1_152/predictions/Softmax:0") -* Densenet121, QuantizedDensenet121 (output_tensors = "classifier/densenet121/predictions/Softmax:0") -* Vgg16, QuantizedVgg16 (output_tensors = "classifier/vgg_16/fc8/squeezed:0") -* SsdVgg, QuantizedSsdVgg (output_tensors = ['ssd_300_vgg/block4_box/Reshape_1:0', 'ssd_300_vgg/block7_box/Reshape_1:0', 'ssd_300_vgg/block8_box/Reshape_1:0', 'ssd_300_vgg/block9_box/Reshape_1:0', 'ssd_300_vgg/block10_box/Reshape_1:0', 'ssd_300_vgg/block11_box/Reshape_1:0', 'ssd_300_vgg/block4_box/Reshape:0', 'ssd_300_vgg/block7_box/Reshape:0', 'ssd_300_vgg/block8_box/Reshape:0', 'ssd_300_vgg/block9_box/Reshape:0', 'ssd_300_vgg/block10_box/Reshape:0', 'ssd_300_vgg/block11_box/Reshape:0']) +* Resnet50, QuantizedResnet50 +``` +output_tensors = "classifier_1/resnet_v1_50/predictions/Softmax:0" +``` +* Resnet152, QuantizedResnet152 +``` +output_tensors = "classifier/resnet_v1_152/predictions/Softmax:0" +``` +* Densenet121, QuantizedDensenet121 +``` +output_tensors = "classifier/densenet121/predictions/Softmax:0" +``` +* Vgg16, QuantizedVgg16 +``` +output_tensors = "classifier/vgg_16/fc8/squeezed:0" +``` +* SsdVgg, QuantizedSsdVgg +``` +output_tensors = ['ssd_300_vgg/block4_box/Reshape_1:0', 'ssd_300_vgg/block7_box/Reshape_1:0', 'ssd_300_vgg/block8_box/Reshape_1:0', 'ssd_300_vgg/block9_box/Reshape_1:0', 'ssd_300_vgg/block10_box/Reshape_1:0', 'ssd_300_vgg/block11_box/Reshape_1:0', 'ssd_300_vgg/block4_box/Reshape:0', 'ssd_300_vgg/block7_box/Reshape:0', 'ssd_300_vgg/block8_box/Reshape:0', 'ssd_300_vgg/block9_box/Reshape:0', 'ssd_300_vgg/block10_box/Reshape:0', 'ssd_300_vgg/block11_box/Reshape:0'] +``` -For more information, please reference the azureml.accel.models package in the [Azure ML Python SDK documentation](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/?view=azure-ml-py).
+For more information, please reference the azureml.accel.models package in the [Azure ML Accel Models SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-accel-models/azureml.accel.models?view=azure-ml-py). **Input tensors** diff --git a/how-to-use-azureml/deployment/accelerated-models/accelerated-models-object-detection.ipynb b/how-to-use-azureml/deployment/accelerated-models/accelerated-models-object-detection.ipynb index 4338aa67..be6894e8 100644 --- a/how-to-use-azureml/deployment/accelerated-models/accelerated-models-object-detection.ipynb +++ b/how-to-use-azureml/deployment/accelerated-models/accelerated-models-object-detection.ipynb @@ -203,7 +203,7 @@ "source": [ "\n", "## 3. Create AccelContainerImage\n", - "Below we will execute all the same steps as in the [Quickstart](accelerated-models-quickstart.ipynb) to package the model we have saved locally into an accelerated Docker image saved in our workspace. To complete all the steps, it may take a few minutes. For more details on each step, check out the [Quickstart section on model registration](accelerated-models-quickstart.ipynb#register-model)." + "Below we will execute all the same steps as in the [Quickstart](./accelerated-models-quickstart.ipynb#create-image) to package the model we have saved locally into an accelerated Docker image saved in our workspace. To complete all the steps, it may take a few minutes. For more details on each step, check out the [Quickstart section on model registration](./accelerated-models-quickstart.ipynb#register-model)." ] }, { @@ -260,7 +260,7 @@ "See the sample [here](https://github.com/Azure-Samples/aml-real-time-ai/) for using the Azure IoT CLI extension for deploying your Docker image to your Databox Edge Machine.\n", "\n", "### 4.b.
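The output tensor strings listed above follow TensorFlow's `op_name:output_index` naming convention; a small helper (ours, purely illustrative) shows how such a name splits apart:

```python
def split_tensor_name(tensor_name: str):
    """Split a TensorFlow tensor name into (operation name, output index)."""
    op_name, _, index = tensor_name.rpartition(":")
    return op_name, int(index)

op, idx = split_tensor_name("classifier_1/resnet_v1_50/predictions/Softmax:0")
print(op)   # classifier_1/resnet_v1_50/predictions/Softmax
print(idx)  # 0
```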
Deploy to AKS Cluster\n", - "Same as in the [Quickstart section on image deployment](accelerated-models-quickstart.ipynb#deploy-image), we are going to create an AKS cluster with FPGA-enabled machines, then deploy our service to it.\n", + "Same as in the [Quickstart section on image deployment](./accelerated-models-quickstart.ipynb#deploy-image), we are going to create an AKS cluster with FPGA-enabled machines, then deploy our service to it.\n", "#### Create AKS ComputeTarget" ] }, { @@ -438,7 +438,7 @@ "source": [ "\n", "## 6. Cleanup\n", - "It's important to clean up your resources, so that you won't incur unnecessary costs. In the [next notebook](accelerated-models-training.ipynb) you will learn how to train a classfier on a new dataset using transfer learning." + "It's important to clean up your resources, so that you won't incur unnecessary costs. In the [next notebook](./accelerated-models-training.ipynb) you will learn how to train a classifier on a new dataset using transfer learning." ] }, { @@ -482,7 +482,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.5.6" + "version": "3.6.0" } }, "nbformat": 4, diff --git a/how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb b/how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb index 46209333..fb6c4976 100644 --- a/how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb +++ b/how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb @@ -20,7 +20,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This tutorial will show you how to deploy an image recognition service based on the ResNet 50 classifier using the Azure Machine Learning Accelerated Models service.
Get more information about our service from our [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas) or [forum](https://aka.ms/aml-forum).\n", + "This tutorial will show you how to deploy an image recognition service based on the ResNet 50 classifier using the Azure Machine Learning Accelerated Models service. Get more information about our service from our [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas), [API reference](https://docs.microsoft.com/en-us/python/api/azureml-accel-models/azureml.accel?view=azure-ml-py), or [forum](https://aka.ms/aml-forum).\n", "\n", "We will use an accelerated ResNet50 featurizer running on an FPGA. Our Accelerated Models Service handles translating deep neural networks (DNN) into an FPGA program.\n", "\n", @@ -137,7 +137,7 @@ "metadata": {}, "source": [ "### 2.c. Classifier\n", - "The model we downloaded includes a classifier which takes the output of the ResNet50 and identifies an image. This classifier is trained on the ImageNet dataset. We are going to use this classifier for our service. The next [notebook](project-brainwave-trainsfer-learning.ipynb) shows how to train a classifier for a different data set. The input to the classifier is a tensor matching the output of our ResNet50 featurizer." + "The model we downloaded includes a classifier which takes the output of the ResNet50 and identifies an image. This classifier is trained on the ImageNet dataset. We are going to use this classifier for our service. The next [notebook](./accelerated-models-training.ipynb) shows how to train a classifier for a different data set. The input to the classifier is a tensor matching the output of our ResNet50 featurizer." ] }, { @@ -492,7 +492,7 @@ "source": [ "\n", "## 8. Clean-up\n", - "Run the cell below to delete your webservice, image, and model (must be done in that order). 
In the [next notebook](project-brainwave-custom-weights.ipynb) you will learn how to train a classfier on a new dataset using transfer learning and finetune the weights." + "Run the cell below to delete your webservice, image, and model (must be done in that order). In the [next notebook](./accelerated-models-training.ipynb) you will learn how to train a classifier on a new dataset using transfer learning and fine-tune the weights." ] }, { @@ -536,7 +536,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.5.6" + "version": "3.6.0" } }, "nbformat": 4, diff --git a/how-to-use-azureml/deployment/accelerated-models/accelerated-models-training.ipynb b/how-to-use-azureml/deployment/accelerated-models/accelerated-models-training.ipynb index 0b6bc349..f0d645fe 100644 --- a/how-to-use-azureml/deployment/accelerated-models/accelerated-models-training.ipynb +++ b/how-to-use-azureml/deployment/accelerated-models/accelerated-models-training.ipynb @@ -59,7 +59,7 @@ "source": [ "\n", "## 1. Setup Environment\n", - "#### 1.a. Please set up your environment as described in the [Quickstart](project-brainwave-quickstart.ipynb), meaning:\n", + "#### 1.a. Please set up your environment as described in the [Quickstart](./accelerated-models-quickstart.ipynb), meaning:\n", "* Make sure your Workspace config.json exists and has the correct info\n", "* Install Tensorflow\n", "\n", @@ -398,7 +398,7 @@ "source": [ "\n", "## 6. Execute steps\n", - "You can run through the Transfer Learning section, then skip to Model Deployment. By default, because the custom weights section takes much longer for training twice, it is all saved as Raw NBConvert mode. This means it is not executable, but you can change them to Code type to run through them.\n", + "You can run through the Transfer Learning section, then skip to Create AccelContainerImage.
Because the custom weights section takes much longer (it trains twice), its cells are not saved as executable by default. You can copy the code into a code cell or change the cell type to 'Code' to run them.\n", "\n", "\n", "### 6.a. Training using Transfer Learning" ] }, { @@ -461,10 +461,11 @@ ] }, { - "cell_type": "raw", + "cell_type": "markdown", "metadata": {}, "source": [ - "# Launch the training\n", + "#### Launch the training\n", + "```\n", "tf.reset_default_graph()\n", "sess = tf.Session(graph=tf.get_default_graph())\n", "\n", @@ -473,7 +474,8 @@ " train_model(preds, in_images, img_train, label_train, is_retrain=False, train_epoch=10) \n", " accuracy = test_model(preds, in_images, img_test, label_test) \n", " print(\"Accuracy:\", accuracy)\n", - " featurizer.save_weights(custom_weights_dir + \"/rn50\", tf.get_default_session())" + " featurizer.save_weights(custom_weights_dir + \"/rn50\", tf.get_default_session())\n", + "```" ] }, { @@ -485,9 +487,10 @@ ] }, { - "cell_type": "raw", + "cell_type": "markdown", "metadata": {}, "source": [ + "```\n", "tf.reset_default_graph()\n", "sess = tf.Session(graph=tf.get_default_graph())\n", "\n", @@ -495,7 +498,8 @@ " print(\"Testing trained model with quantization\")\n", " in_images, image_tensors, features, preds, quantized_featurizer = construct_model(quantized=True, starting_weights_directory=custom_weights_dir)\n", " accuracy = test_model(preds, in_images, img_test, label_test) \n", - " print(\"Accuracy:\", accuracy)" + " print(\"Accuracy:\", accuracy)\n", + "```" ] }, { @@ -507,15 +511,17 @@ ] }, { - "cell_type": "raw", + "cell_type": "markdown", "metadata": {}, "source": [ + "```\n", "if (accuracy < 0.93):\n", " with sess.as_default():\n", " print(\"Fine-tuning model with quantization\")\n", " train_model(preds, in_images, img_train, label_train, is_retrain=True, train_epoch=10)\n", " accuracy = test_model(preds, in_images, img_test, label_test) \n", - " print(\"Accuracy:\", accuracy)" + " print(\"Accuracy:\", accuracy)\n", + "```" ] }, { @@ -526,9
+532,10 @@ ] }, { - "cell_type": "raw", + "cell_type": "markdown", "metadata": {}, "source": [ + "```\n", "model_name = 'resnet50-catsanddogs-cw'\n", "model_save_path = os.path.join(saved_model_dir, model_name)\n", "\n", @@ -537,7 +544,8 @@ " outputs={'output_alias': preds})\n", "\n", "input_tensors = in_images.name\n", - "output_tensors = preds.name" + "output_tensors = preds.name\n", + "```" ] }, { @@ -546,7 +554,8 @@ "source": [ "\n", "## 7. Create AccelContainerImage\n", - "Below we will execute all the same steps as in the [Quickstart](accelerated-models-quickstart.ipynb) to package the model we have saved locally into an accelerated Docker image saved in our workspace. To complete all the steps, it may take a few minutes. For more details on each step, check out the [Quickstart section on model registration](accelerated-models-quickstart.ipynb#register-model)." + "\n", + "Below we will execute all the same steps as in the [Quickstart](./accelerated-models-quickstart.ipynb#create-image) to package the model we have saved locally into an accelerated Docker image saved in our workspace. To complete all the steps, it may take a few minutes. For more details on each step, check out the [Quickstart section on model registration](./accelerated-models-quickstart.ipynb#register-model)." ] }, { @@ -841,7 +850,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.5.6" + "version": "3.6.0" } }, "nbformat": 4,
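The notebook's fine-tuning cell retrains only when accuracy falls below 0.93; that control flow can be sketched framework-free (the function names here are placeholders, not the notebook's helpers):

```python
def finetune_if_needed(evaluate, finetune, threshold=0.93):
    """Re-run fine-tuning only when the measured accuracy misses the threshold."""
    accuracy = evaluate()
    if accuracy < threshold:
        finetune()
        accuracy = evaluate()
    return accuracy

# Toy usage: a model whose accuracy rises from 0.90 to 0.95 after fine-tuning.
scores = iter([0.90, 0.95])
print(finetune_if_needed(lambda: next(scores), lambda: None))  # 0.95
```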