Compare commits

...

75 Commits

Author SHA1 Message Date
Roope Astala
0b8817ee1c Merge pull request #229 from rastala/master
version 1.0.17
2019-02-25 16:12:51 -05:00
Roope Astala
b7b5576b15 version 1.0.17 2019-02-25 16:12:02 -05:00
Hai Ning
c082b72b71 Update pr.md 2019-02-23 21:55:59 -05:00
Hai Ning
673e76d431 Merge pull request #186 from gison93/master
Fix typos
2019-02-20 23:18:15 -05:00
Hai Ning
c518a04a19 Merge pull request #203 from davidefiocco/patch-1
Typo fix
2019-02-20 23:17:14 -05:00
Hai Ning
2f34888716 Update README.md 2019-02-20 07:52:14 -05:00
Roope Astala
6ca0088991 Merge pull request #218 from jeff-shepherd/master
Fixed broken links to configuration notebook
2019-02-15 14:47:49 -05:00
Jeff Shepherd
40e3856786 Removed subsampling reference, which is not published yet 2019-02-15 11:35:45 -08:00
Jeff Shepherd
ddd025e83e Fixed links to configuration notebook. 2019-02-15 11:31:10 -08:00
Hai Ning
ece4242c8f Update README.md 2019-02-15 12:57:08 -05:00
Hai Ning
4bca2bd7db Merge pull request #217 from nishankgu/patch-1
Update README.md
2019-02-15 12:52:59 -05:00
Nishank
a927dbfa31 Update README.md 2019-02-14 14:22:05 -08:00
hning86
280c718f53 keras sample 2019-02-14 16:59:08 -05:00
Hai Ning
bf1ac2b26a Update NBSETUP.md 2019-02-14 11:02:01 -05:00
Roope Astala
954c2afbce Merge pull request #214 from rongduan-zhu/master
Updated Azure Databricks Automated ML notebook from master
2019-02-13 14:06:48 -05:00
Rongduan Zhu
fbf1ea5f1a updated notebook from latest master 2019-02-13 11:02:27 -08:00
Roope Astala
84b72d904b Merge pull request #210 from rastala/master
tutorial update
2019-02-11 16:07:47 -05:00
Roope Astala
82bb9fcac3 tutorial update 2019-02-11 16:07:10 -05:00
Roope Astala
5c6bbacd47 Merge pull request #209 from rastala/master
adb readme update
2019-02-11 15:52:34 -05:00
Roope Astala
90aaeea113 adb readme update 2019-02-11 15:51:50 -05:00
Roope Astala
eeab7284c9 Merge pull request #208 from rastala/master
few missing files
2019-02-11 15:48:22 -05:00
Roope Astala
02fd9b685c few missing files 2019-02-11 15:47:37 -05:00
hning86
d5c923b446 dockerfile updated 2019-02-11 15:21:56 -05:00
Roope Astala
f16bf27e26 Merge pull request #207 from rastala/master
release 1.0.15
2019-02-11 15:18:00 -05:00
Roope Astala
c7bec58593 update version 2019-02-11 15:17:40 -05:00
Roope Astala
cca3996eb4 release 1.0.15 2019-02-11 15:12:30 -05:00
Davide Fiocco
210efe022a Typo fix 2019-02-08 20:23:12 +01:00
Roope Astala
5fd14bac30 Merge pull request #199 from rastala/master
update automl databricks
2019-02-06 11:53:35 -05:00
Roope Astala
3fa409543b update automl databricks 2019-02-06 11:53:00 -05:00
Josée Martens
42f2822b61 Adding file to enable search performance tracking.
@rastala
2019-02-04 14:36:40 -06:00
Roope Astala
48afbe1cab Delete release.json 2019-01-31 16:07:08 -05:00
Roope Astala
1298c55dd4 Merge pull request #193 from rastala/master
fix broken link
2019-01-31 15:45:01 -05:00
Roope Astala
0aa1b248f4 fix broken link 2019-01-31 15:44:22 -05:00
Roope Astala
3012b8f5a8 Merge pull request #192 from rastala/master
add authentication notebook
2019-01-31 15:41:40 -05:00
Roope Astala
501c55bcaf add authentication notebook 2019-01-31 15:40:51 -05:00
hning86
1a38f50221 docker instructions 2019-01-31 15:16:36 -05:00
hning86
cc64be8d6f text update 2019-01-31 14:29:31 -05:00
hning86
a0127a2a64 dockerfile instruction 2019-01-31 11:46:06 -05:00
Hai Ning
7eb966bf79 Merge pull request #191 from Azure/dockerfiles
Dockerfiles
2019-01-31 10:54:55 -05:00
Roope Astala
9118f2c7ce Merge pull request #190 from rastala/master
fix NBSETUP
2019-01-31 09:33:17 -05:00
Roope Astala
0e3198f311 fix NBSETUP 2019-01-31 09:32:30 -05:00
hning86
0fdab91b97 dockefile reorg 2019-01-31 09:21:06 -05:00
hning86
b54be912d8 dockerfiles added 2019-01-30 17:04:18 -05:00
Roope Astala
3d0c7990ff Merge pull request #189 from rastala/master
update tutorial readme
2019-01-30 14:28:24 -05:00
Roope Astala
6e1ce29a94 Merge remote-tracking branch 'upstream/master' 2019-01-30 14:26:25 -05:00
Roope Astala
0d26c9986a update tutorials README 2019-01-30 14:25:17 -05:00
gison93
100ab10797 add pipeline validation 2019-01-29 14:50:00 +01:00
gison93
1307efe7bc fix typo
remove trailing \u00c2\u00a0 from variable and notebook_path
2019-01-29 14:34:07 +01:00
gison93
08d0b8cf08 fix typo
Bloband -> Blob and
2019-01-29 12:42:48 +01:00
Roope Astala
0514eee64b Merge pull request #182 from rastala/master
version 1.0.10
2019-01-28 18:10:20 -05:00
Roope Astala
4b6e34fdc0 Update train-within-notebook.ipynb 2019-01-28 18:09:36 -05:00
Roope Astala
e01216d85b Update configuration.ipynb 2019-01-28 18:08:41 -05:00
Roope Astala
b00f75edd8 version 1.0.10 2019-01-28 15:30:17 -05:00
Hai Ning
06aba388c6 Update azure-ml-with-nvidia-rapids.ipynb 2019-01-24 10:09:31 -05:00
Roope Astala
3018461dfc Merge pull request #176 from rastala/master
update tutorials
2019-01-22 14:25:28 -05:00
Roope Astala
0d91f2d697 update tutorials 2019-01-22 14:24:31 -05:00
Roope Astala
a14cb635f0 Merge pull request #175 from rastala/master
RAPIDS sample
2019-01-22 13:44:55 -05:00
Roope Astala
88f6a966cc RAPIDS sample 2019-01-22 13:32:59 -05:00
Hai Ning
4f76a844c6 Update README.md 2019-01-18 01:18:44 -05:00
Hai Ning
c1573ff949 Update NBSETUP.md 2019-01-18 01:15:53 -05:00
Hai Ning
d1b18b3771 Update NBSETUP.md 2019-01-18 01:09:13 -05:00
Roope Astala
e1a948f4cd Merge pull request #168 from rastala/master
version 1.0.8
2019-01-14 12:14:02 -08:00
Roope Astala
3ca40c0817 version 1.0.8 2019-01-14 15:13:30 -05:00
Roope Astala
f724cb4d9b Merge pull request #166 from jeff-shepherd/master
Fixed broken links in tutorials
2019-01-08 12:01:50 -08:00
Jeff Shepherd
094b4b3b13 Fixed broken links in tutorials 2019-01-08 11:58:03 -08:00
Roope Astala
d09942f521 Merge pull request #165 from rastala/master
databricks update
2019-01-08 09:24:11 -08:00
Roope Astala
0c9e527174 databricks update 2019-01-08 12:23:15 -05:00
Roope Astala
e2640e54da Merge pull request #160 from rastala/master
Create aml-pipelines-concept.png
2019-01-02 12:03:13 -08:00
Roope Astala
d348baf8a1 Create aml-pipelines-concept.png 2019-01-02 15:02:25 -05:00
Roope Astala
b41e11e30d Merge pull request #159 from jeff-shepherd/master
Removed databricks notebook link
2019-01-02 11:56:15 -08:00
Jeff Shepherd
c1aa951867 Removed databricks notebook link 2019-01-02 11:45:52 -08:00
Roope Astala
5fe5f06e07 Merge pull request #158 from rastala/master
Create Databricks_AMLSDK_1-4_6.dbc
2019-01-02 10:52:24 -08:00
Roope Astala
e8a09c49b1 Create Databricks_AMLSDK_1-4_6.dbc 2019-01-02 13:51:29 -05:00
Roope Astala
fb6a73a790 Merge pull request #145 from rastala/master
fix databricks
2018-12-20 13:11:17 -08:00
Roope Astala
c2968b6526 fix databricks 2018-12-20 16:10:27 -05:00
118 changed files with 34840 additions and 27088 deletions

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.10"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.10" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.15"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.15" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.2"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.2" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.6"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.6" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.8"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.8" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"
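The Dockerfiles above are identical except for the pinned `azureml-sdk` version and the matching release branch. As an illustration only (not part of the repository), that pattern could be captured with a small Python template; `render_dockerfile` is a hypothetical helper:

```python
# Illustrative sketch: the repository's Dockerfiles differ solely in the pinned
# SDK version, so a single template could generate any of them.
DOCKERFILE_TEMPLATE = """\
FROM continuumio/miniconda:4.5.11
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
RUN conda create -n azureml -y -q Python=3.6
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]=={version}"]
RUN cd /home && git clone -b "azureml-sdk-{version}" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
EXPOSE 8887
"""

def render_dockerfile(version: str) -> str:
    """Fill the template with a specific SDK version string."""
    return DOCKERFILE_TEMPLATE.format(version=version)

if __name__ == "__main__":
    # Both the pip pin and the git branch pick up the same version.
    print(render_dockerfile("1.0.17"))
```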

View File

@@ -1,10 +1,11 @@
# Notebook setup # Setting up environment
--- ---
To run the notebooks in this repository use one of these methods: To run the notebooks in this repository, use one of the following options.
## Use Azure Notebooks - Jupyter based notebooks in the Azure cloud ## **Option 1: Use Azure Notebooks**
Azure Notebooks is a hosted Jupyter-based notebook service in the Azure cloud. Azure Machine Learning Python SDK is already pre-installed in the Azure Notebooks `Python 3.6` kernel.
1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks) 1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
[Import sample notebooks ](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks [Import sample notebooks ](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks
@@ -15,20 +16,91 @@ To run the notebooks in this repository use one of these methods:
![set kernel to Python 3.6](images/python36.png) ![set kernel to Python 3.6](images/python36.png)
## **Use your own notebook server** ## **Option 2: Use your own notebook server**
Video walkthrough: ### Quick installation
We recommend you create a Python virtual environment ([Miniconda](https://conda.io/miniconda.html) preferred but [virtualenv](https://virtualenv.pypa.io/en/latest/) works too) and install the SDK in it.
```sh
# install just the base SDK
pip install azureml-sdk
# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# below steps are optional
# install the base SDK and a Jupyter notebook server
pip install azureml-sdk[notebooks]
# install the data prep component
pip install azureml-dataprep
# install model explainability component
pip install azureml-sdk[explain]
# install automated ml components
pip install azureml-sdk[automl]
# install experimental features (not ready for production use)
pip install azureml-sdk[contrib]
```
Note the _extras_ (the keywords inside the square brackets) can be combined. For example:
```sh
# install base SDK, Jupyter notebook and automated ml components
pip install azureml-sdk[notebooks,automl]
```
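The way the bracketed extras combine into a single install command can be sketched with a small Python helper; `pip_install_command` is hypothetical and not part of the SDK, purely an illustration of the string being built:

```python
# Hypothetical helper: build the pip command line for a set of azureml-sdk
# extras, mirroring how the bracketed keywords above combine.
def pip_install_command(extras, version=None):
    """Return a pip command string for azureml-sdk with optional extras."""
    pkg = "azureml-sdk"
    if extras:
        # extras are comma-separated inside one pair of brackets
        pkg += "[" + ",".join(sorted(extras)) + "]"
    if version:
        pkg += "==" + version
    return "pip install " + pkg

print(pip_install_command(["notebooks", "automl"]))
# pip install azureml-sdk[automl,notebooks]
```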
### Full instructions
[Install the Azure Machine Learning SDK](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python)
Please make sure you start with the [Configuration](configuration.ipynb) notebook to create and connect to a workspace.
### Video walkthrough:
[![Get Started video](images/yt_cover.png)](https://youtu.be/VIsXeTuW3FU) [![Get Started video](images/yt_cover.png)](https://youtu.be/VIsXeTuW3FU)
1. Setup a Jupyter Notebook server and [install the Azure Machine Learning SDK](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-create-workspace-with-python)
1. Clone [this repository](https://aka.ms/aml-notebooks)
1. You may need to install other packages for specific notebooks
- For example, to run the Azure Machine Learning Data Prep notebooks, install the extra dataprep SDK:
```bash
pip install azureml-dataprep
```
1. Start your notebook server ## **Option 3: Use Docker**
1. Follow the instructions in the [Configuration](configuration.ipynb) notebook to create and connect to a workspace
1. Open one of the sample notebooks You need to have Docker engine installed locally and running. Open a command line window and type the following command.
__Note:__ We use version `1.0.10` below as an example, but you can replace that with any available version number you like.
```sh
# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# change current directory to the folder
# where Dockerfile of the specific SDK version is located.
cd MachineLearningNotebooks/Dockerfiles/1.0.10
# build a Docker image with a name (azuremlsdk for example)
# and a version number tag (1.0.10 for example).
# this can take several minutes depending on your computer speed and network bandwidth.
docker build . -t azuremlsdk:1.0.10
# launch the built Docker container which also automatically starts
# a Jupyter server instance listening on port 8887 of the host machine
docker run -it -p 8887:8887 azuremlsdk:1.0.10
```
Now you can point your browser to http://localhost:8887. We recommend that you start from the `configuration.ipynb` notebook at the root directory.
If you need additional Azure ML SDK components, you can either modify the Dockerfiles to add additional steps before you build the images, or install the components from the command line in the running container. For example:
```sh
# install dataprep components
pip install azureml-dataprep
# install the core SDK and automated ml components
pip install azureml-sdk[automl]
# install the core SDK and model explainability component
pip install azureml-sdk[explain]
# install the core SDK and experimental components
pip install azureml-sdk[contrib]
```

View File

@@ -1,40 +1,56 @@
# Azure Machine Learning service sample notebooks # Azure Machine Learning service example notebooks
--- This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK ![Azure ML workflow](https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/service/media/overview-what-is-azure-ml/aml.png)
which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK
allows you the choice of using local or cloud compute resources, while managing
and maintaining the complete data science workflow from the cloud.
* Read [instructions on setting up notebooks](./NBSETUP.md) to run these notebooks. ## Quick installation
```sh
pip install azureml-sdk
```
Read more detailed instructions on [how to set up your environment](./NBSETUP.md) using Azure Notebook service, your own Jupyter notebook server, or Docker.
* Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/). ## How to navigate and use the example notebooks?
You should always run the [Configuration](./configuration.ipynb) notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.
## Getting Started If you want to...
These examples will provide you with an effective way to get started using AML. Once you're familiar with * ...try out and explore Azure ML, start with image classification tutorials: [Part 1 (Training)](./tutorials/img-classification-part1-training.ipynb) and [Part 2 (Deployment)](./tutorials/img-classification-part2-deploy.ipynb).
some of the capabilities, explore the repository for specific topics. * ...prepare your data and do automated machine learning, start with regression tutorials: [Part 1 (Data Prep)](./tutorials/regression-part1-data-prep.ipynb) and [Part 2 (Automated ML)](./tutorials/regression-part2-automated-ml.ipynb).
* ...learn about experimentation and tracking run history, first [train within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) and [using logging APIs](./how-to-use-azureml/training/logging-api/logging-api.ipynb).
- [Configuration](./configuration.ipynb) configures your notebook library to easily connect to an * ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
Azure Machine Learning workspace, and sets up your workspace to be used by many of the other examples. You should * ...deploy models as a realtime scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [register and manage models, and create Docker images](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), and [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
always run this first when setting up a notebook library on a new machine or in a new environment * ...deploy models as a batch scoring service, first [train a model within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), learn how to [register and manage models](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), then [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](./how-to-use-azureml/machine-learning-pipelines/pipeline-mpi-batch-prediction.ipynb).
- [Train in notebook](./how-to-use-azureml/training/train-within-notebook) shows how to create a model directly in a notebook while recording * ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) and [model data collection](./how-to-use-azureml/deployment/enable-data-collection-for-models-in-aks/enable-data-collection-for-models-in-aks.ipynb).
metrics and deploy that model to a test service
- [Train on remote](./how-to-use-azureml/training/train-on-remote-vm) takes the previous example and shows how to create the model on a cloud compute target
- [Production deploy to AKS](./how-to-use-azureml/deployment/production-deploy-to-aks) shows how to create a production grade inferencing webservice
## Tutorials ## Tutorials
The [Tutorials](./tutorials) folder contains notebooks for the tutorials described in the [Azure Machine Learning documentation](https://aka.ms/aml-docs) The [Tutorials](./tutorials) folder contains notebooks for the tutorials described in the [Azure Machine Learning documentation](https://aka.ms/aml-docs).
## How to use AML ## How to use Azure ML
The [How to use AML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
- [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets. - [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets
- [Training with Deep Learning](./how-to-use-azureml/training-with-deep-learning) - Examples demonstrating how to build deep learning models using estimators and parameter sweeps - [Training with Deep Learning](./how-to-use-azureml/training-with-deep-learning) - Examples demonstrating how to build deep learning models using estimators and parameter sweeps
- [Manage Azure ML Service](./how-to-use-azureml/manage-azureml-service) - Examples of how to perform tasks, such as authenticating against the Azure ML service in different ways.
- [Automated Machine Learning](./how-to-use-azureml/automated-machine-learning) - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models - [Automated Machine Learning](./how-to-use-azureml/automated-machine-learning) - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models
- [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring - [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
- [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions - [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions
- [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks - [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks
---
## Documentation
* Quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
* [Python SDK reference](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py)
* Azure ML Data Prep SDK [overview](https://aka.ms/data-prep-sdk), [Python SDK reference](https://aka.ms/aml-data-prep-apiref), and [tutorials and how-tos](https://aka.ms/aml-data-prep-notebooks).
---
## Projects using Azure Machine Learning
Visit the following repos to see projects contributed by Azure ML users:
- [Fine tune natural language processing models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
- [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)

View File

@@ -96,7 +96,7 @@
"source": [ "source": [
"import azureml.core\n", "import azureml.core\n",
"\n", "\n",
"print(\"This notebook was created using version 1.0.6 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.0.17 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -0,0 +1,409 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# NVIDIA RAPIDS in Azure Machine Learning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The [RAPIDS](https://www.developer.nvidia.com/rapids) suite of software libraries from NVIDIA enables the execution of end-to-end data science and analytics pipelines entirely on GPUs. In many machine learning projects, a significant portion of the model training time is spent in setting up the data; this stage of the process is known as Extraction, Transformation and Loading, or ETL. By using the DataFrame API for ETL and GPU-capable ML algorithms in RAPIDS, data preparation and model training can be done in GPU-accelerated end-to-end pipelines without incurring serialization costs between the pipeline stages. This notebook demonstrates how to use NVIDIA RAPIDS to prepare data and train a model in Azure.\n",
" \n",
"In this notebook, we will do the following:\n",
" \n",
"* Create an Azure Machine Learning Workspace\n",
"* Create an AMLCompute target\n",
"* Use a script to process our data and train a model\n",
"* Obtain the data required to run this sample\n",
"* Create an AML run configuration to launch a machine learning job\n",
"* Run the script to prepare data for training and train the model\n",
" \n",
"Prerequisites:\n",
"* An Azure subscription to create a Machine Learning Workspace\n",
"* Familiarity with the Azure ML SDK (refer to [notebook samples](https://github.com/Azure/MachineLearningNotebooks))\n",
"* A Jupyter notebook environment with the Azure Machine Learning SDK installed. Refer to instructions to [set up the environment](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Verify that the Azure ML SDK is installed"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from azureml.core import Workspace, Experiment\n",
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core import ScriptRunConfig\n",
"from azureml.widgets import RunDetails"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Azure ML Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following step is optional if you already have a workspace. If you want to use an existing workspace, then\n",
"skip this workspace creation step and move on to the next step to load the workspace.\n",
" \n",
"<font color='red'>Important</font>: in the code cell below, be sure to set the correct values for the subscription_id, \n",
"resource_group, workspace_name, and region before executing this code cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", \"<subscription_id>\")\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", \"<resource_group>\")\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", \"<workspace_name>\")\n",
"workspace_region = os.environ.get(\"WORKSPACE_REGION\", \"<region>\")\n",
"\n",
"ws = Workspace.create(workspace_name, subscription_id=subscription_id, resource_group=resource_group, location=workspace_region)\n",
"\n",
"# write config to a local directory for future use\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load existing Workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"# if a locally-saved configuration file for the workspace is not available, use the following to load workspace\n",
"# ws = Workspace(subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name)\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
"\n",
"scripts_folder = \"scripts_folder\"\n",
"\n",
"if not os.path.isdir(scripts_folder):\n",
" os.mkdir(scripts_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AML Compute Target"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because NVIDIA RAPIDS requires Pascal-generation or newer NVIDIA GPUs (such as the P100, P40, or V100), specify a compute target from one of the [NC_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview) virtual machine types in Azure; these are the families of virtual machines in Azure that are provisioned with such GPUs.\n",
" \n",
"Pick one of the supported VM SKUs based on the number of GPUs you want to use for ETL and training in RAPIDS.\n",
" \n",
"The script in this notebook is implemented for single-machine scenarios. An example supporting multiple nodes will be published later."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"gpu_cluster_name = \"gpucluster\"\n",
"\n",
"if gpu_cluster_name in ws.compute_targets:\n",
" gpu_cluster = ws.compute_targets[gpu_cluster_name]\n",
"    if gpu_cluster and isinstance(gpu_cluster, AmlCompute):\n",
"        print('Found existing compute target: ' + gpu_cluster_name + '; using it.')\n",
"else:\n",
" print(\"creating new cluster\")\n",
" # vm_size parameter below could be modified to one of the RAPIDS-supported VM types\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"Standard_NC6s_v2\", min_nodes=1, max_nodes = 1)\n",
"\n",
" # create the cluster\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)\n",
" gpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Script to process data and train model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The _process&#95;data.py_ script used in the step below is a slightly modified implementation of [RAPIDS E2E example](https://github.com/rapidsai/notebooks/blob/master/mortgage/E2E.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# copy process_data.py into the script folder\n",
"import shutil\n",
"shutil.copy('./process_data.py', os.path.join(scripts_folder, 'process_data.py'))\n",
"\n",
"with open(os.path.join(scripts_folder, 'process_data.py'), 'r') as process_data_script:\n",
" print(process_data_script.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data required to run this sample"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample uses [Fannie Mae's Single-Family Loan Performance Data](http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html). Refer to the 'Available mortgage datasets' section in [instructions](https://rapidsai.github.io/demos/datasets/mortgage-data) to get sample data.\n",
"\n",
"Once you obtain access to the data, you will need to make this data available in an [Azure Machine Learning Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data), for use in this sample."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<font color='red'>Important</font>: The following step assumes the data is uploaded to the Workspace's default data store under a folder named 'mortgagedata2000_01'. Note that uploading data to the Workspace's default data store is not necessary and the data can be referenced from any datastore, e.g., from Azure Blob or File service, once it is added as a datastore to the workspace. The path_on_datastore parameter needs to be updated, depending on where the data is available. The directory where the data is available should have the following folder structure, as the process_data.py script expects this directory structure:\n",
"* _&lt;data directory>_/acq\n",
"* _&lt;data directory>_/perf\n",
"* _names.csv_\n",
"\n",
"The 'acq' and 'perf' refer to directories containing data files. The _&lt;data directory>_ is the path specified in _path&#95;on&#95;datastore_ parameter in the step below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = ws.get_default_datastore()\n",
"\n",
"# download and uncompress data in a local directory before uploading to data store\n",
"# directory specified in src_dir parameter below should have the acq, perf directories with data and names.csv file\n",
"# ds.upload(src_dir='<local directory that has data>', target_path='mortgagedata2000_01', overwrite=True, show_progress=True)\n",
"\n",
"# data already uploaded to the datastore\n",
"data_ref = DataReference(data_reference_name='data', datastore=ds, path_on_datastore='mortgagedata2000_01')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AML run configuration to launch a machine learning job"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"AML allows the option of using existing Docker images with prebuilt conda environments. The following step uses an existing image from [Docker Hub](https://hub.docker.com/r/rapidsai/rapidsai/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_config = RunConfiguration()\n",
"run_config.framework = 'python'\n",
"run_config.environment.python.user_managed_dependencies = True\n",
"# use conda environment named 'rapids' available in the Docker image\n",
"# this conda environment does not include the azureml-defaults package, which is required for AML functionality such as metrics tracking and model management\n",
"run_config.environment.python.interpreter_path = '/conda/envs/rapids/bin/python'\n",
"run_config.target = gpu_cluster_name\n",
"run_config.environment.docker.enabled = True\n",
"run_config.environment.docker.gpu_support = True\n",
"# if registry is not mentioned the image is pulled from Docker Hub\n",
"run_config.environment.docker.base_image = \"rapidsai/rapidsai:cuda9.2_ubuntu16.04_root\"\n",
"run_config.environment.spark.precache_packages = False\n",
"run_config.data_references={'data':data_ref.to_config()}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Wrapper function to submit Azure Machine Learning experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# cpu_training indicates whether training should be done on CPU; if True, GPUs are used *only* for ETL and *not* for training\n",
"# gpu_count indicates the number of GPUs (among those available in the VM) to use for ETL and, if cpu_training is False, for training as well\n",
"def run_rapids_experiment(cpu_training, gpu_count):\n",
"    # any value between 1-4 is allowed here depending on the type of VMs available in gpu_cluster\n",
" if gpu_count not in [1, 2, 3, 4]:\n",
" raise Exception('Value specified for the number of GPUs to use {0} is invalid'.format(gpu_count))\n",
"\n",
" # following data partition mapping is empirical (specific to GPUs used and current data partitioning scheme) and may need to be tweaked\n",
" gpu_count_data_partition_mapping = {1: 2, 2: 4, 3: 5, 4: 7}\n",
" part_count = gpu_count_data_partition_mapping[gpu_count]\n",
"\n",
" end_year = 2000\n",
" if gpu_count > 2:\n",
" end_year = 2001 # use more data with more GPUs\n",
"\n",
" src = ScriptRunConfig(source_directory=scripts_folder, \n",
" script='process_data.py', \n",
" arguments = ['--num_gpu', gpu_count, '--data_dir', str(data_ref),\n",
" '--part_count', part_count, '--end_year', end_year,\n",
" '--cpu_predictor', cpu_training\n",
" ],\n",
" run_config=run_config\n",
" )\n",
"\n",
" exp = Experiment(ws, 'rapidstest')\n",
" run = exp.submit(config=src)\n",
" RunDetails(run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit experiment (ETL & training on GPU)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cpu_predictor = False\n",
"# the value for num_gpu should be less than or equal to the number of GPUs available in the VM\n",
"num_gpu = 1 \n",
"# use GPU for both ETL and training\n",
"run_rapids_experiment(cpu_predictor, num_gpu)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit experiment (ETL on GPU, training on CPU)\n",
"\n",
"To observe the performance difference between GPU-accelerated RAPIDS-based training and CPU-only training, set 'cpu_predictor' to 'True' and rerun the experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cpu_predictor = True\n",
"# the value for num_gpu should be less than or equal to the number of GPUs available in the VM\n",
"num_gpu = 1\n",
"# train using CPU, use GPU for ETL\n",
"run_rapids_experiment(cpu_predictor, num_gpu)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete cluster"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# delete the cluster\n",
"# gpu_cluster.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "ksivas"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
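The wrapper function in the notebook validates the requested GPU count, maps it to an empirical data-partition count, and widens the data range when more GPUs are available. A minimal standalone sketch of that logic (the mapping values are the empirical ones from the notebook; the `resolve_run_settings` helper name is hypothetical):

```python
# Empirical mapping from GPU count to data partition count, as used in the
# notebook; specific to the GPUs and data partitioning scheme in this sample.
GPU_COUNT_PARTITION_MAPPING = {1: 2, 2: 4, 3: 5, 4: 7}


def resolve_run_settings(gpu_count):
    """Validate gpu_count and derive the partition count and data end year."""
    if gpu_count not in GPU_COUNT_PARTITION_MAPPING:
        raise ValueError(
            'Value specified for the number of GPUs to use {0} is invalid'.format(gpu_count))
    part_count = GPU_COUNT_PARTITION_MAPPING[gpu_count]
    end_year = 2001 if gpu_count > 2 else 2000  # use more data with more GPUs
    return part_count, end_year
```

These derived values become the `--part_count` and `--end_year` arguments passed to `process_data.py` via `ScriptRunConfig`.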


@@ -0,0 +1,500 @@
# License Info: https://github.com/rapidsai/notebooks/blob/master/LICENSE
import numpy as np
import datetime
import dask_xgboost as dxgb_gpu
import dask
import dask_cudf
from dask.delayed import delayed
from dask.distributed import Client, wait
import xgboost as xgb
import cudf
from cudf.dataframe import DataFrame
from collections import OrderedDict
import gc
from glob import glob
import os
import argparse
parser = argparse.ArgumentParser("rapidssample")
parser.add_argument("--data_dir", type=str, help="location of data")
parser.add_argument("--num_gpu", type=int, help="Number of GPUs to use", default=1)
parser.add_argument("--part_count", type=int, help="Number of data files to train against", default=2)
parser.add_argument("--end_year", type=int, help="Year to end the data load", default=2000)
parser.add_argument("--cpu_predictor", type=str, help="Flag to train and predict on CPU instead of GPU", default='False')
parser.add_argument('-f', type=str, default='') # added for notebook execution scenarios
args = parser.parse_args()
data_dir = args.data_dir
num_gpu = args.num_gpu
part_count = args.part_count
end_year = args.end_year
cpu_predictor = args.cpu_predictor.lower() in ('yes', 'true', 't', 'y', '1')
print('data_dir = {0}'.format(data_dir))
print('num_gpu = {0}'.format(num_gpu))
print('part_count = {0}'.format(part_count))
part_count = part_count + 1 # adding one because the usage below is not inclusive
print('end_year = {0}'.format(end_year))
print('cpu_predictor = {0}'.format(cpu_predictor))
import subprocess
cmd = "hostname --all-ip-addresses"
process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
IPADDR = str(output.decode()).split()[0]
print('IPADDR is {0}'.format(IPADDR))
cmd = "/rapids/notebooks/utils/dask-setup.sh 0"
process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
cmd = "/rapids/notebooks/utils/dask-setup.sh rapids " + str(num_gpu) + " 8786 8787 8790 " + str(IPADDR) + " MASTER"
process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
print(output.decode())
_client = IPADDR + str(":8786")
client = dask.distributed.Client(_client)
def initialize_rmm_pool():
from librmm_cffi import librmm_config as rmm_cfg
rmm_cfg.use_pool_allocator = True
#rmm_cfg.initial_pool_size = 2<<30 # set to 2GiB. Default is 1/2 total GPU memory
import cudf
return cudf._gdf.rmm_initialize()
def initialize_rmm_no_pool():
from librmm_cffi import librmm_config as rmm_cfg
rmm_cfg.use_pool_allocator = False
import cudf
return cudf._gdf.rmm_initialize()
def run_dask_task(func, **kwargs):
task = func(**kwargs)
return task
def process_quarter_gpu(year=2000, quarter=1, perf_file=""):
ml_arrays = run_dask_task(delayed(run_gpu_workflow),
quarter=quarter,
year=year,
perf_file=perf_file)
return client.compute(ml_arrays,
optimize_graph=False,
fifo_timeout="0ms"
)
def null_workaround(df, **kwargs):
for column, data_type in df.dtypes.items():
if str(data_type) == "category":
df[column] = df[column].astype('int32').fillna(-1)
if str(data_type) in ['int8', 'int16', 'int32', 'int64', 'float32', 'float64']:
df[column] = df[column].fillna(-1)
return df
def run_gpu_workflow(quarter=1, year=2000, perf_file="", **kwargs):
names = gpu_load_names()
acq_gdf = gpu_load_acquisition_csv(acquisition_path= acq_data_path + "/Acquisition_"
+ str(year) + "Q" + str(quarter) + ".txt")
acq_gdf = acq_gdf.merge(names, how='left', on=['seller_name'])
acq_gdf.drop_column('seller_name')
acq_gdf['seller_name'] = acq_gdf['new']
acq_gdf.drop_column('new')
perf_df_tmp = gpu_load_performance_csv(perf_file)
gdf = perf_df_tmp
everdf = create_ever_features(gdf)
delinq_merge = create_delinq_features(gdf)
everdf = join_ever_delinq_features(everdf, delinq_merge)
del(delinq_merge)
joined_df = create_joined_df(gdf, everdf)
testdf = create_12_mon_features(joined_df)
joined_df = combine_joined_12_mon(joined_df, testdf)
del(testdf)
perf_df = final_performance_delinquency(gdf, joined_df)
del(gdf, joined_df)
final_gdf = join_perf_acq_gdfs(perf_df, acq_gdf)
del(perf_df)
del(acq_gdf)
final_gdf = last_mile_cleaning(final_gdf)
return final_gdf
def gpu_load_performance_csv(performance_path, **kwargs):
""" Loads performance data
Returns
-------
GPU DataFrame
"""
cols = [
"loan_id", "monthly_reporting_period", "servicer", "interest_rate", "current_actual_upb",
"loan_age", "remaining_months_to_legal_maturity", "adj_remaining_months_to_maturity",
"maturity_date", "msa", "current_loan_delinquency_status", "mod_flag", "zero_balance_code",
"zero_balance_effective_date", "last_paid_installment_date", "foreclosed_after",
"disposition_date", "foreclosure_costs", "prop_preservation_and_repair_costs",
"asset_recovery_costs", "misc_holding_expenses", "holding_taxes", "net_sale_proceeds",
"credit_enhancement_proceeds", "repurchase_make_whole_proceeds", "other_foreclosure_proceeds",
"non_interest_bearing_upb", "principal_forgiveness_upb", "repurchase_make_whole_proceeds_flag",
"foreclosure_principal_write_off_amount", "servicing_activity_indicator"
]
dtypes = OrderedDict([
("loan_id", "int64"),
("monthly_reporting_period", "date"),
("servicer", "category"),
("interest_rate", "float64"),
("current_actual_upb", "float64"),
("loan_age", "float64"),
("remaining_months_to_legal_maturity", "float64"),
("adj_remaining_months_to_maturity", "float64"),
("maturity_date", "date"),
("msa", "float64"),
("current_loan_delinquency_status", "int32"),
("mod_flag", "category"),
("zero_balance_code", "category"),
("zero_balance_effective_date", "date"),
("last_paid_installment_date", "date"),
("foreclosed_after", "date"),
("disposition_date", "date"),
("foreclosure_costs", "float64"),
("prop_preservation_and_repair_costs", "float64"),
("asset_recovery_costs", "float64"),
("misc_holding_expenses", "float64"),
("holding_taxes", "float64"),
("net_sale_proceeds", "float64"),
("credit_enhancement_proceeds", "float64"),
("repurchase_make_whole_proceeds", "float64"),
("other_foreclosure_proceeds", "float64"),
("non_interest_bearing_upb", "float64"),
("principal_forgiveness_upb", "float64"),
("repurchase_make_whole_proceeds_flag", "category"),
("foreclosure_principal_write_off_amount", "float64"),
("servicing_activity_indicator", "category")
])
print(performance_path)
return cudf.read_csv(performance_path, names=cols, delimiter='|', dtype=list(dtypes.values()), skiprows=1)
def gpu_load_acquisition_csv(acquisition_path, **kwargs):
""" Loads acquisition data
Returns
-------
GPU DataFrame
"""
cols = [
'loan_id', 'orig_channel', 'seller_name', 'orig_interest_rate', 'orig_upb', 'orig_loan_term',
'orig_date', 'first_pay_date', 'orig_ltv', 'orig_cltv', 'num_borrowers', 'dti', 'borrower_credit_score',
'first_home_buyer', 'loan_purpose', 'property_type', 'num_units', 'occupancy_status', 'property_state',
'zip', 'mortgage_insurance_percent', 'product_type', 'coborrow_credit_score', 'mortgage_insurance_type',
'relocation_mortgage_indicator'
]
dtypes = OrderedDict([
("loan_id", "int64"),
("orig_channel", "category"),
("seller_name", "category"),
("orig_interest_rate", "float64"),
("orig_upb", "int64"),
("orig_loan_term", "int64"),
("orig_date", "date"),
("first_pay_date", "date"),
("orig_ltv", "float64"),
("orig_cltv", "float64"),
("num_borrowers", "float64"),
("dti", "float64"),
("borrower_credit_score", "float64"),
("first_home_buyer", "category"),
("loan_purpose", "category"),
("property_type", "category"),
("num_units", "int64"),
("occupancy_status", "category"),
("property_state", "category"),
("zip", "int64"),
("mortgage_insurance_percent", "float64"),
("product_type", "category"),
("coborrow_credit_score", "float64"),
("mortgage_insurance_type", "float64"),
("relocation_mortgage_indicator", "category")
])
print(acquisition_path)
return cudf.read_csv(acquisition_path, names=cols, delimiter='|', dtype=list(dtypes.values()), skiprows=1)
def gpu_load_names(**kwargs):
""" Loads names used for renaming the banks
Returns
-------
GPU DataFrame
"""
cols = [
'seller_name', 'new'
]
dtypes = OrderedDict([
("seller_name", "category"),
("new", "category"),
])
return cudf.read_csv(col_names_path, names=cols, delimiter='|', dtype=list(dtypes.values()), skiprows=1)
def create_ever_features(gdf, **kwargs):
everdf = gdf[['loan_id', 'current_loan_delinquency_status']]
everdf = everdf.groupby('loan_id', method='hash').max()
del(gdf)
everdf['ever_30'] = (everdf['max_current_loan_delinquency_status'] >= 1).astype('int8')
everdf['ever_90'] = (everdf['max_current_loan_delinquency_status'] >= 3).astype('int8')
everdf['ever_180'] = (everdf['max_current_loan_delinquency_status'] >= 6).astype('int8')
everdf.drop_column('max_current_loan_delinquency_status')
return everdf
def create_delinq_features(gdf, **kwargs):
delinq_gdf = gdf[['loan_id', 'monthly_reporting_period', 'current_loan_delinquency_status']]
del(gdf)
delinq_30 = delinq_gdf.query('current_loan_delinquency_status >= 1')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
delinq_30['delinquency_30'] = delinq_30['min_monthly_reporting_period']
delinq_30.drop_column('min_monthly_reporting_period')
delinq_90 = delinq_gdf.query('current_loan_delinquency_status >= 3')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
delinq_90['delinquency_90'] = delinq_90['min_monthly_reporting_period']
delinq_90.drop_column('min_monthly_reporting_period')
delinq_180 = delinq_gdf.query('current_loan_delinquency_status >= 6')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
delinq_180['delinquency_180'] = delinq_180['min_monthly_reporting_period']
delinq_180.drop_column('min_monthly_reporting_period')
del(delinq_gdf)
delinq_merge = delinq_30.merge(delinq_90, how='left', on=['loan_id'], type='hash')
delinq_merge['delinquency_90'] = delinq_merge['delinquency_90'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
delinq_merge = delinq_merge.merge(delinq_180, how='left', on=['loan_id'], type='hash')
delinq_merge['delinquency_180'] = delinq_merge['delinquency_180'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
del(delinq_30)
del(delinq_90)
del(delinq_180)
return delinq_merge
def join_ever_delinq_features(everdf_tmp, delinq_merge, **kwargs):
everdf = everdf_tmp.merge(delinq_merge, on=['loan_id'], how='left', type='hash')
del(everdf_tmp)
del(delinq_merge)
everdf['delinquency_30'] = everdf['delinquency_30'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
everdf['delinquency_90'] = everdf['delinquency_90'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
everdf['delinquency_180'] = everdf['delinquency_180'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
return everdf
def create_joined_df(gdf, everdf, **kwargs):
test = gdf[['loan_id', 'monthly_reporting_period', 'current_loan_delinquency_status', 'current_actual_upb']]
del(gdf)
test['timestamp'] = test['monthly_reporting_period']
test.drop_column('monthly_reporting_period')
test['timestamp_month'] = test['timestamp'].dt.month
test['timestamp_year'] = test['timestamp'].dt.year
test['delinquency_12'] = test['current_loan_delinquency_status']
test.drop_column('current_loan_delinquency_status')
test['upb_12'] = test['current_actual_upb']
test.drop_column('current_actual_upb')
test['upb_12'] = test['upb_12'].fillna(999999999)
test['delinquency_12'] = test['delinquency_12'].fillna(-1)
joined_df = test.merge(everdf, how='left', on=['loan_id'], type='hash')
del(everdf)
del(test)
joined_df['ever_30'] = joined_df['ever_30'].fillna(-1)
joined_df['ever_90'] = joined_df['ever_90'].fillna(-1)
joined_df['ever_180'] = joined_df['ever_180'].fillna(-1)
joined_df['delinquency_30'] = joined_df['delinquency_30'].fillna(-1)
joined_df['delinquency_90'] = joined_df['delinquency_90'].fillna(-1)
joined_df['delinquency_180'] = joined_df['delinquency_180'].fillna(-1)
joined_df['timestamp_year'] = joined_df['timestamp_year'].astype('int32')
joined_df['timestamp_month'] = joined_df['timestamp_month'].astype('int32')
return joined_df
def create_12_mon_features(joined_df, **kwargs):
testdfs = []
n_months = 12
for y in range(1, n_months + 1):
tmpdf = joined_df[['loan_id', 'timestamp_year', 'timestamp_month', 'delinquency_12', 'upb_12']]
tmpdf['josh_months'] = tmpdf['timestamp_year'] * 12 + tmpdf['timestamp_month']
tmpdf['josh_mody_n'] = ((tmpdf['josh_months'].astype('float64') - 24000 - y) / 12).floor()
tmpdf = tmpdf.groupby(['loan_id', 'josh_mody_n'], method='hash').agg({'delinquency_12': 'max','upb_12': 'min'})
tmpdf['delinquency_12'] = (tmpdf['max_delinquency_12']>3).astype('int32')
tmpdf['delinquency_12'] +=(tmpdf['min_upb_12']==0).astype('int32')
tmpdf.drop_column('max_delinquency_12')
tmpdf['upb_12'] = tmpdf['min_upb_12']
tmpdf.drop_column('min_upb_12')
tmpdf['timestamp_year'] = (((tmpdf['josh_mody_n'] * n_months) + 24000 + (y - 1)) / 12).floor().astype('int16')
tmpdf['timestamp_month'] = np.int8(y)
tmpdf.drop_column('josh_mody_n')
testdfs.append(tmpdf)
del(tmpdf)
del(joined_df)
return cudf.concat(testdfs)
def combine_joined_12_mon(joined_df, testdf, **kwargs):
joined_df.drop_column('delinquency_12')
joined_df.drop_column('upb_12')
joined_df['timestamp_year'] = joined_df['timestamp_year'].astype('int16')
joined_df['timestamp_month'] = joined_df['timestamp_month'].astype('int8')
return joined_df.merge(testdf, how='left', on=['loan_id', 'timestamp_year', 'timestamp_month'], type='hash')
def final_performance_delinquency(gdf, joined_df, **kwargs):
merged = null_workaround(gdf)
joined_df = null_workaround(joined_df)
merged['timestamp_month'] = merged['monthly_reporting_period'].dt.month
merged['timestamp_month'] = merged['timestamp_month'].astype('int8')
merged['timestamp_year'] = merged['monthly_reporting_period'].dt.year
merged['timestamp_year'] = merged['timestamp_year'].astype('int16')
merged = merged.merge(joined_df, how='left', on=['loan_id', 'timestamp_year', 'timestamp_month'], type='hash')
merged.drop_column('timestamp_year')
merged.drop_column('timestamp_month')
return merged
def join_perf_acq_gdfs(perf, acq, **kwargs):
perf = null_workaround(perf)
acq = null_workaround(acq)
return perf.merge(acq, how='left', on=['loan_id'], type='hash')
def last_mile_cleaning(df, **kwargs):
drop_list = [
'loan_id', 'orig_date', 'first_pay_date', 'seller_name',
'monthly_reporting_period', 'last_paid_installment_date', 'maturity_date', 'ever_30', 'ever_90', 'ever_180',
'delinquency_30', 'delinquency_90', 'delinquency_180', 'upb_12',
'zero_balance_effective_date','foreclosed_after', 'disposition_date','timestamp'
]
for column in drop_list:
df.drop_column(column)
for col, dtype in df.dtypes.iteritems():
if str(dtype)=='category':
df[col] = df[col].cat.codes
df[col] = df[col].astype('float32')
df['delinquency_12'] = df['delinquency_12'] > 0
df['delinquency_12'] = df['delinquency_12'].fillna(False).astype('int32')
for column in df.columns:
df[column] = df[column].fillna(-1)
return df.to_arrow(index=False)
# to download data for this notebook, visit https://rapidsai.github.io/demos/datasets/mortgage-data and update the following paths accordingly
acq_data_path = "{0}/acq".format(data_dir) #"/rapids/data/mortgage/acq"
perf_data_path = "{0}/perf".format(data_dir) #"/rapids/data/mortgage/perf"
col_names_path = "{0}/names.csv".format(data_dir) # "/rapids/data/mortgage/names.csv"
start_year = 2000
#end_year = 2000 # end_year is inclusive -- converted to parameter
#part_count = 2 # the number of data files to train against -- converted to parameter
client.run(initialize_rmm_pool)
# NOTE: The ETL calculates additional features which are then dropped before creating the XGBoost DMatrix.
# This can be optimized to avoid calculating the dropped features.
print("Reading ...")
t1 = datetime.datetime.now()
gpu_dfs = []
gpu_time = 0
quarter = 1
year = start_year
count = 0
while year <= end_year:
for file in glob(os.path.join(perf_data_path + "/Performance_" + str(year) + "Q" + str(quarter) + "*")):
if count < part_count:
gpu_dfs.append(process_quarter_gpu(year=year, quarter=quarter, perf_file=file))
count += 1
print('file: {0}'.format(file))
print('count: {0}'.format(count))
quarter += 1
if quarter == 5:
year += 1
quarter = 1
wait(gpu_dfs)
t2 = datetime.datetime.now()
print("Reading time ...")
print(t2-t1)
print('len(gpu_dfs) is {0}'.format(len(gpu_dfs)))
client.run(cudf._gdf.rmm_finalize)
client.run(initialize_rmm_no_pool)
dxgb_gpu_params = {
'nround': 100,
'max_depth': 8,
'max_leaves': 2**8,
'alpha': 0.9,
'eta': 0.1,
'gamma': 0.1,
'learning_rate': 0.1,
'subsample': 1,
'reg_lambda': 1,
'scale_pos_weight': 2,
'min_child_weight': 30,
'tree_method': 'gpu_hist',
'n_gpus': 1,
'distributed_dask': True,
'loss': 'ls',
'objective': 'gpu:reg:linear',
'max_features': 'auto',
'criterion': 'friedman_mse',
'grow_policy': 'lossguide',
'verbose': True
}
if cpu_predictor:
print('Training using CPUs')
dxgb_gpu_params['predictor'] = 'cpu_predictor'
dxgb_gpu_params['tree_method'] = 'hist'
dxgb_gpu_params['objective'] = 'reg:linear'
else:
print('Training using GPUs')
print('Training parameters are {0}'.format(dxgb_gpu_params))
gpu_dfs = [delayed(DataFrame.from_arrow)(gpu_df) for gpu_df in gpu_dfs[:part_count]]
wait(gpu_dfs)
tmp_map = [(gpu_df, list(client.who_has(gpu_df).values())[0]) for gpu_df in gpu_dfs]
new_map = {}
for key, value in tmp_map:
if value not in new_map:
new_map[value] = [key]
else:
new_map[value].append(key)
del(tmp_map)
gpu_dfs = []
for list_delayed in new_map.values():
gpu_dfs.append(delayed(cudf.concat)(list_delayed))
del(new_map)
gpu_dfs = [(gpu_df[['delinquency_12']], gpu_df[delayed(list)(gpu_df.columns.difference(['delinquency_12']))]) for gpu_df in gpu_dfs]
gpu_dfs = [(gpu_df[0].persist(), gpu_df[1].persist()) for gpu_df in gpu_dfs]
gpu_dfs = [dask.delayed(xgb.DMatrix)(gpu_df[1], gpu_df[0]) for gpu_df in gpu_dfs]
gpu_dfs = [gpu_df.persist() for gpu_df in gpu_dfs]
gc.collect()
labels = None
print('str(gpu_dfs) is {0}'.format(str(gpu_dfs)))
wait(gpu_dfs)
t1 = datetime.datetime.now()
bst = dxgb_gpu.train(client, dxgb_gpu_params, gpu_dfs, labels, num_boost_round=dxgb_gpu_params['nround'])
t2 = datetime.datetime.now()
print("Training time ...")
print(t2-t1)
print('str(bst) is {0}'.format(str(bst)))
print('Exiting script')
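The script above accepts `--cpu_predictor` as a string and compares its lowercased value against a set of truthy spellings, because `argparse` with `type=bool` would coerce any non-empty string (including `"False"`) to `True`. A minimal sketch of that pattern, assuming the same argument name and default (the `parse_flags` wrapper is hypothetical):

```python
import argparse


def parse_flags(argv):
    """Parse the CPU-predictor flag the way the script does: as a string."""
    parser = argparse.ArgumentParser("rapidssample")
    parser.add_argument("--cpu_predictor", type=str,
                        help="Flag to use CPU for prediction", default='False')
    args = parser.parse_args(argv)
    # type=bool would treat any non-empty string as True, so the script
    # instead checks for an explicit truthy spelling
    return args.cpu_predictor.lower() in ('yes', 'true', 't', 'y', '1')
```

With this approach `--cpu_predictor False` (and the default) yield `False`, while `--cpu_predictor True`, `--cpu_predictor 1`, etc. yield `True`.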


@@ -0,0 +1 @@
google-site-verification: googleade5d7141b3f2910.html


@@ -4,8 +4,9 @@ Learn how to use Azure Machine Learning services for experimentation and model m
As a pre-requisite, run the [configuration Notebook](../configuration.ipynb) notebook first to set up your Azure ML Workspace. Then, run the notebooks in the following recommended order.
* [train-within-notebook](./training/train-within-notebook): Train a model while tracking run history, and learn how to deploy the model as a web service to Azure Container Instance.
* [train-on-local](./training/train-on-local): Learn how to submit a run to local computer and use Azure ML managed run configuration.
* [train-on-amlcompute](./training/train-on-amlcompute): Use a 1-n node Azure ML managed compute cluster for remote runs on Azure CPU or GPU infrastructure.
* [train-on-remote-vm](./training/train-on-remote-vm): Use Data Science Virtual Machine as a target for remote runs.
* [logging-api](./training/logging-api): Learn about the details of logging metrics to run history.
* [register-model-create-image-deploy-service](./deployment/register-model-create-image-deploy-service): Learn about the details of model management.
@@ -13,4 +14,4 @@ As a pre-requisite, run the [configuration Notebook](../configuration.ipynb) not
* [enable-data-collection-for-models-in-aks](./deployment/enable-data-collection-for-models-in-aks): Learn about data collection APIs for deployed model.
* [enable-app-insights-in-production-service](./deployment/enable-app-insights-in-production-service): Learn how to use App Insights with production web service.
Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).


@@ -25,7 +25,7 @@ Below are the three execution environments supported by AutoML.
1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
[Import sample notebooks](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks.
1. Follow the instructions in the [configuration](../../configuration.ipynb) notebook to create and connect to a workspace.
1. Open one of the sample notebooks.
<a name="databricks"></a>
@@ -35,7 +35,7 @@ Below are the three execution environments supported by AutoML.
**NOTE**: You should have at least contributor access to your Azure subscription to run the notebook.
- Please remove any previous SDK version and install the latest SDK by installing **azureml-sdk[automl_databricks]** as a PyPI library in the Azure Databricks workspace.
- You can find the detailed Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks).
- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks) and import it into the Azure Databricks workspace.
- Attach the notebook to the cluster.
<a name="localconda"></a>
@@ -90,7 +90,7 @@ bash automl_setup_linux.sh
```
### 4. Running configuration.ipynb
- Before running any samples, you first need to run the configuration notebook. Click on the [configuration](../../configuration.ipynb) notebook.
- Execute the cells in the notebook to register the Machine Learning Services Resource Provider and create a workspace. (*instructions in notebook*)
### 5. Running Samples
@@ -99,9 +99,6 @@ bash automl_setup_linux.sh
<a name="samples"></a>
# Automated ML SDK Sample Notebooks
- [configuration.ipynb](configuration.ipynb)
- Create new Azure ML Workspace
- Save Workspace configuration file
- [auto-ml-classification.ipynb](classification/auto-ml-classification.ipynb)
- Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
@@ -169,16 +166,15 @@ bash automl_setup_linux.sh
- How to specify sample_weight
- The difference that it makes to test results
- [auto-ml-subsampling-local.ipynb](subsampling/auto-ml-subsampling-local.ipynb)
- How to enable subsampling
- [auto-ml-dataprep.ipynb](dataprep/auto-ml-dataprep.ipynb)
- Using DataPrep for reading data
- [auto-ml-dataprep-remote-execution.ipynb](dataprep-remote-execution/auto-ml-dataprep-remote-execution.ipynb)
- Using DataPrep for reading data with remote execution
- [auto-ml-classification-local-azuredatabricks.ipynb](classification-local-azuredatabricks/auto-ml-classification-local-azuredatabricks.ipynb)
- Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using AutoML for classification using Azure Databricks as the platform for training
- [auto-ml-classification-with-whitelisting.ipynb](classification-with-whitelisting/auto-ml-classification-with-whitelisting.ipynb)
- Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification with whitelisted TensorFlow models.
@@ -233,6 +229,9 @@ If a sample notebook fails with an error that property, method or library does n
1) Check that you have selected the correct kernel in the Jupyter notebook. The kernel is displayed in the top right of the notebook page. It can be changed using the `Kernel | Change Kernel` menu option. For Azure Notebooks, it should be `Python 3.6`. For local conda environments, it should be the conda environment name that you specified in automl_setup. The default is azure_automl. Note that the kernel is saved as part of the notebook, so if you switch to a new conda environment, you will have to select the new kernel in the notebook.
2) Check that the notebook is for the SDK version that you are using. You can check the SDK version by executing `azureml.core.VERSION` in a Jupyter notebook cell. You can download previous versions of the sample notebooks from GitHub by clicking the `Branch` button, selecting the `Tags` tab and then selecting the version.
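As a side note on the version check above: `azureml.core.VERSION` is a plain string such as `"1.0.17"`, and comparing version strings lexicographically gives wrong answers. A minimal, stdlib-only sketch of a correct comparison (the version values and helper name here are illustrative, not part of the SDK):

```python
# Hypothetical helper: parse a dotted version string for numeric comparison.
# Comparing raw strings is wrong because "1.0.9" > "1.0.17" lexicographically.
def parse_version(version: str) -> tuple:
    """Turn '1.0.17' into (1, 0, 17)."""
    return tuple(int(part) for part in version.split("."))

sdk_version = "1.0.17"       # stand-in for the value of azureml.core.VERSION
notebook_version = "1.0.10"  # illustrative: the release the notebook targets

if parse_version(sdk_version) < parse_version(notebook_version):
    print("SDK is older than the notebook; upgrade azureml-sdk or use the matching tag")
else:
    print("SDK version is new enough for this notebook")
```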
## Numpy import fails on Windows
Some Windows environments see an error loading numpy with the latest Python version 3.6.8. If you see this issue, try with Python version 3.6.7.
## Remote run: DsvmCompute.create fails
There are several reasons why DsvmCompute.create can fail. The reason is usually in the error message, but you have to look at the end of the error message for the detailed reason. Some common reasons are:
1) `Compute name is invalid, it should start with a letter, be between 2 and 16 character, and only include letters (a-zA-Z), numbers (0-9) and \'-\'.` Note that underscore is not allowed in the name.
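The naming rule quoted in (1) can be checked up front with a short regex. A sketch (the helper name is ours, not part of the SDK; the pattern simply mirrors the quoted rule):

```python
import re

# Mirrors the quoted rule: start with a letter, 2-16 characters total,
# only letters (a-zA-Z), numbers (0-9) and '-'; underscores are rejected.
_COMPUTE_NAME = re.compile(r"^[a-zA-Z][a-zA-Z0-9-]{1,15}$")

def is_valid_compute_name(name: str) -> bool:
    return _COMPUTE_NAME.fullmatch(name) is not None

print(is_valid_compute_name("mydsvm-01"))  # True
print(is_valid_compute_name("my_dsvm"))    # False: underscore not allowed
print(is_valid_compute_name("a"))          # False: shorter than 2 characters
```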


@@ -2,7 +2,7 @@ name: azure_automl
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python>=3.5.2,<3.6.8
- nb_conda
- matplotlib==2.1.0
- numpy>=1.11.0,<1.15.0
@@ -12,20 +12,9 @@ dependencies:
- scikit-learn>=0.18.0,<=0.19.1
- pandas>=0.22.0,<0.23.0
- tensorflow>=1.12.0
- py-xgboost<=0.80
# Required for azuremlftk
- dill
- pyodbc
- statsmodels
- numexpr
- keras
- distributed>=1.21.5,<1.24
- pip:
# Required for azuremlftk
- https://azuremlpackages.blob.core.windows.net/forecasting/azuremlftk-0.1.18323.5a1-py3-none-any.whl
# Required packages for AzureML execution, history, and data preparation.
- azureml-sdk[automl,notebooks,explain]
- pandas_ml


@@ -2,7 +2,7 @@ name: azure_automl
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python>=3.5.2,<3.6.8
- nb_conda
- matplotlib==2.1.0
- numpy>=1.15.3
@@ -12,20 +12,9 @@ dependencies:
- scikit-learn>=0.18.0,<=0.19.1
- pandas>=0.22.0,<0.23.0
- tensorflow>=1.12.0
- py-xgboost<=0.80
# Required for azuremlftk
- dill
- pyodbc
- statsmodels
- numexpr
- keras
- distributed>=1.21.5,<1.24
- pip:
# Required for azuremlftk
- https://azuremlpackages.blob.core.windows.net/forecasting/azuremlftk-0.1.18323.5a1-py3-none-any.whl
# Required packages for AzureML execution, history, and data preparation.
- azureml-sdk[automl,notebooks,explain]
- pandas_ml


@@ -1,6 +1,7 @@
@echo off
set conda_env_name=%1
set automl_env_file=%2
set options=%3
set PIP_NO_WARN_SCRIPT_LOCATION=0
IF "%conda_env_name%"=="" SET conda_env_name="azure_automl"
@@ -23,15 +24,21 @@ if errorlevel 1 goto ErrorExit
call python -m ipykernel install --user --name %conda_env_name% --display-name "Python (%conda_env_name%)"
REM azureml.widgets is now installed as part of the pip install under the conda env.
REM Removing the old user install so that the notebooks will use the latest widget.
call jupyter nbextension uninstall --user --py azureml.widgets
echo.
echo.
echo ***************************************
echo * AutoML setup completed successfully *
echo ***************************************
IF NOT "%options%"=="nolaunch" (
echo.
echo Starting jupyter notebook - please run the configuration notebook
echo.
jupyter notebook --log-level=50 --notebook-dir='..\..'
)
goto End


@@ -2,6 +2,7 @@
CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0
if [ "$CONDA_ENV_NAME" == "" ]
@@ -22,20 +23,25 @@ fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
jupyter nbextension uninstall --user --py azureml.widgets &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
if [ "$OPTIONS" != "nolaunch" ]
then
echo "" &&
echo "Starting jupyter notebook - please run the configuration notebook" &&
echo "" &&
jupyter notebook --log-level=50 --notebook-dir '../..'
fi
fi
if [ $? -gt 0 ]
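The `nolaunch` handling added above boils down to a small guard on the third positional argument. A sketch (the function name is ours, not part of the scripts) of the logic both setup scripts now share:

```shell
#!/bin/bash
# Sketch of the "nolaunch" guard added to the setup scripts: a third
# positional argument of "nolaunch" suppresses the Jupyter launch step.
should_launch() {
    [ "$1" != "nolaunch" ]
}

OPTIONS=$3
if should_launch "$OPTIONS"; then
    echo "would run: jupyter notebook --log-level=50 --notebook-dir '../..'"
else
    echo "setup finished without launching jupyter"
fi
```

Invoked as `bash automl_setup_linux.sh azure_automl automl_env.yml nolaunch`, the script installs the environment but leaves Jupyter unstarted.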


@@ -2,6 +2,7 @@
CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0
if [ "$CONDA_ENV_NAME" == "" ]
@@ -22,22 +23,27 @@ fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
conda install lightgbm -c conda-forge -y &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
jupyter nbextension uninstall --user --py azureml.widgets &&
pip install numpy==1.15.3 &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
if [ "$OPTIONS" != "nolaunch" ]
then
echo "" &&
echo "Starting jupyter notebook - please run the configuration notebook" &&
echo "" &&
jupyter notebook --log-level=50 --notebook-dir '../..'
fi
fi
if [ $? -gt 0 ]


@@ -1,403 +1,381 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification using whitelist models**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"This notebook shows how AutoML can be trained on a selected list of models; see the readme.md for the models.\n",
"This trains the model exclusively on TensorFlow-based models.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model on whitelisted models using local compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-whitelist'\n",
"project_folder = './sample_projects/automl-local-whitelist'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_train = digits.data[100:,:]\n",
"y_train = digits.target[100:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
"|**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 60,\n",
" iterations = 10,\n",
" n_cross_validations = 3,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" enable_tf=True,\n",
" whitelist_models=[\"TensorFlowLinearClassifier\", \"TensorFlowDNN\"],\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
"    properties = run.get_properties()\n",
"    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
"    metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"source": [ "third_run, third_model = local_run.get_output(iteration = iteration)\n",
"lookup_metric = \"log_loss\"\n", "print(third_run)\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n", "print(third_model)"
"print(best_run)\n", ]
"print(fitted_model)" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "## Test\n",
"source": [ "\n",
"#### Model from a Specific Iteration\n", "#### Load Test Data"
"Show the run and the model from the third iteration:" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "digits = datasets.load_digits()\n",
"iteration = 3\n", "X_test = digits.data[:10, :]\n",
"third_run, third_model = local_run.get_output(iteration = iteration)\n", "y_test = digits.target[:10]\n",
"print(third_run)\n", "images = digits.images[:10]"
"print(third_model)" ]
] },
}, {
{ "cell_type": "markdown",
"cell_type": "markdown", "metadata": {},
"metadata": {}, "source": [
"source": [ "#### Testing Our Best Fitted Model\n",
"## Test\n", "We will try to predict 2 digits and see how our model works."
"\n", ]
"#### Load Test Data" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "# Randomly select digits and test.\n",
"source": [ "for index in np.random.choice(len(y_test), 2, replace = False):\n",
"digits = datasets.load_digits()\n", " print(index)\n",
"X_test = digits.data[:10, :]\n", " predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
"y_test = digits.target[:10]\n", " label = y_test[index]\n",
"images = digits.images[:10]" " title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
] " fig = plt.figure(1, figsize = (3,3))\n",
}, " ax1 = fig.add_axes((0,0,.8,.8))\n",
{ " ax1.set_title(title)\n",
"cell_type": "markdown", " plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
"metadata": {}, " plt.show()"
"source": [ ]
"#### Testing Our Best Fitted Model\n", }
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize = (3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
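The child-run cell in the notebook above filters each run's metrics down to numeric values and indexes them by iteration number. The same pattern can be sketched without the Azure ML SDK — `FakeRun` below is a hypothetical stand-in for the SDK's run objects, not part of `azureml`:

```python
# Sketch of the notebook's child-run metric aggregation, using a made-up
# FakeRun class in place of azureml run objects.

class FakeRun:
    def __init__(self, iteration, metrics):
        self._properties = {'iteration': str(iteration)}
        self._metrics = metrics

    def get_properties(self):
        return self._properties

    def get_metrics(self):
        return self._metrics

children = [
    FakeRun(0, {'AUC_weighted': 0.91, 'log_loss': 0.40, 'status': 'ok'}),
    FakeRun(1, {'AUC_weighted': 0.95, 'log_loss': 0.22, 'status': 'ok'}),
]

metricslist = {}
for run in children:
    properties = run.get_properties()
    # Keep only numeric metrics, exactly as the notebook cell does.
    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
    metricslist[int(properties['iteration'])] = metrics

print(metricslist)
# {0: {'AUC_weighted': 0.91, 'log_loss': 0.4}, 1: {'AUC_weighted': 0.95, 'log_loss': 0.22}}
```

In the notebook, `pd.DataFrame(metricslist)` then turns this dict-of-dicts into a table with one column per iteration.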


@@ -1,418 +1,396 @@
 {
  "cells": [
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Copyright (c) Microsoft Corporation. All rights reserved.\n",
     "\n",
     "Licensed under the MIT License."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "# Automated Machine Learning\n",
     "_**Classification with Local Compute**_\n",
     "\n",
     "## Contents\n",
     "1. [Introduction](#Introduction)\n",
     "1. [Setup](#Setup)\n",
     "1. [Data](#Data)\n",
     "1. [Train](#Train)\n",
     "1. [Results](#Results)\n",
     "1. [Test](#Test)\n",
     "\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Introduction\n",
     "\n",
     "In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
     "\n",
     "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
     "\n",
     "In this notebook you will learn how to:\n",
     "1. Create an `Experiment` in an existing `Workspace`.\n",
     "2. Configure AutoML using `AutoMLConfig`.\n",
     "3. Train the model using local compute.\n",
     "4. Explore the results.\n",
     "5. Test the best fitted model."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Setup\n",
     "\n",
     "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import logging\n",
-    "import os\n",
-    "import random\n",
     "\n",
     "from matplotlib import pyplot as plt\n",
-    "from matplotlib.pyplot import imshow\n",
     "import numpy as np\n",
     "import pandas as pd\n",
     "from sklearn import datasets\n",
     "\n",
     "import azureml.core\n",
     "from azureml.core.experiment import Experiment\n",
     "from azureml.core.workspace import Workspace\n",
-    "from azureml.train.automl import AutoMLConfig\n",
-    "from azureml.train.automl.run import AutoMLRun"
+    "from azureml.train.automl import AutoMLConfig"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "ws = Workspace.from_config()\n",
     "\n",
     "# Choose a name for the experiment and specify the project folder.\n",
-    "experiment_name = 'automl-local-classification'\n",
-    "project_folder = './sample_projects/automl-local-classification'\n",
+    "experiment_name = 'automl-classification'\n",
+    "project_folder = './sample_projects/automl-classification'\n",
     "\n",
     "experiment = Experiment(ws, experiment_name)\n",
     "\n",
     "output = {}\n",
     "output['SDK version'] = azureml.core.VERSION\n",
     "output['Subscription ID'] = ws.subscription_id\n",
     "output['Workspace Name'] = ws.name\n",
     "output['Resource Group'] = ws.resource_group\n",
     "output['Location'] = ws.location\n",
     "output['Project Directory'] = project_folder\n",
     "output['Experiment Name'] = experiment.name\n",
     "pd.set_option('display.max_colwidth', -1)\n",
-    "pd.DataFrame(data = output, index = ['']).T"
+    "outputDf = pd.DataFrame(data = output, index = [''])\n",
+    "outputDf.T"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Opt-in diagnostics for better experience, quality, and security of future releases."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from azureml.telemetry import set_diagnostics_collection\n",
-    "set_diagnostics_collection(send_diagnostics = True)"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Data\n",
     "\n",
     "This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
-    "from sklearn import datasets\n",
-    "\n",
     "digits = datasets.load_digits()\n",
     "\n",
     "# Exclude the first 100 rows from training so that they can be used for test.\n",
     "X_train = digits.data[100:,:]\n",
     "y_train = digits.target[100:]"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Train\n",
     "\n",
     "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
     "\n",
     "|Property|Description|\n",
     "|-|-|\n",
     "|**task**|classification or regression|\n",
     "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
     "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
     "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
     "|**n_cross_validations**|Number of cross validation splits.|\n",
     "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
     "|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
     "|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "automl_config = AutoMLConfig(task = 'classification',\n",
     "                             debug_log = 'automl_errors.log',\n",
     "                             primary_metric = 'AUC_weighted',\n",
     "                             iteration_timeout_minutes = 60,\n",
     "                             iterations = 25,\n",
     "                             n_cross_validations = 3,\n",
     "                             verbosity = logging.INFO,\n",
     "                             X = X_train, \n",
     "                             y = y_train,\n",
     "                             path = project_folder)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
     "In this example, we specify `show_output = True` to print currently running iterations to the console."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "local_run = experiment.submit(automl_config, show_output = True)"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "local_run"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Optionally, you can continue an interrupted local run by calling `continue_experiment` without the `iterations` parameter, or run more iterations for a completed run by specifying the `iterations` parameter:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "local_run = local_run.continue_experiment(X = X_train, \n",
     "                                          y = y_train, \n",
     "                                          show_output = True,\n",
     "                                          iterations = 5)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Results"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "#### Widget for Monitoring Runs\n",
     "\n",
     "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
     "\n",
     "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "from azureml.widgets import RunDetails\n",
     "RunDetails(local_run).show() "
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "\n",
     "#### Retrieve All Child Runs\n",
     "You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "children = list(local_run.get_children())\n",
     "metricslist = {}\n",
     "for run in children:\n",
     "    properties = run.get_properties()\n",
     "    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
     "    metricslist[int(properties['iteration'])] = metrics\n",
     "\n",
     "rundata = pd.DataFrame(metricslist).sort_index(1)\n",
     "rundata"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "### Retrieve the Best Model\n",
     "\n",
     "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "best_run, fitted_model = local_run.get_output()\n",
     "print(best_run)\n",
     "print(fitted_model)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "#### Best Model Based on Any Other Metric\n",
     "Show the run and the model that has the smallest `log_loss` value:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "lookup_metric = \"log_loss\"\n",
     "best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
     "print(best_run)\n",
     "print(fitted_model)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "#### Model from a Specific Iteration\n",
     "Show the run and the model from the third iteration:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "iteration = 3\n",
     "third_run, third_model = local_run.get_output(iteration = iteration)\n",
     "print(third_run)\n",
     "print(third_model)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Test \n",
+    "## Test\n",
     "\n",
     "#### Load Test Data"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "digits = datasets.load_digits()\n",
     "X_test = digits.data[:10, :]\n",
     "y_test = digits.target[:10]\n",
     "images = digits.images[:10]"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "#### Testing Our Best Fitted Model\n",
     "We will try to predict 2 digits and see how our model works."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Randomly select digits and test.\n",
     "for index in np.random.choice(len(y_test), 2, replace = False):\n",
     "    print(index)\n",
     "    predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
     "    label = y_test[index]\n",
     "    title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
     "    fig = plt.figure(1, figsize = (3,3))\n",
     "    ax1 = fig.add_axes((0,0,.8,.8))\n",
     "    ax1.set_title(title)\n",
     "    plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
     "    plt.show()"
    ]
   }
  ],
  "metadata": {
   "authors": [
    {
     "name": "savitam"
    }
   ],
   "kernelspec": {
    "display_name": "Python 3.6",
    "language": "python",
    "name": "python36"
   },
   "language_info": {
    "codemirror_mode": {
     "name": "ipython",
     "version": 3
    },
    "file_extension": ".py",
    "mimetype": "text/x-python",
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "version": "3.6.6"
   }
  },
  "nbformat": 4,
  "nbformat_minor": 2
 }
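The test loop in the notebook above calls `fitted_model.predict(X_test[index:index + 1])` rather than `predict(X_test[index])`. A minimal sketch of why the slice matters — `DummyModel` is a made-up stand-in for the AutoML fitted model, and plain lists stand in for NumPy arrays:

```python
# predict() expects a 2-D batch of samples; slicing with index:index + 1
# keeps the outer dimension, so a single sample still arrives as one row.

class DummyModel:
    def predict(self, X):
        # Return one "prediction" per row of the 2-D input.
        return [sum(row) % 10 for row in X]

X_test = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

model = DummyModel()
index = 1

batch = X_test[index:index + 1]   # [[4, 5, 6]] -- still a list of rows
predicted = model.predict(batch)[0]
print(predicted)                  # 15 % 10 == 5
```

With NumPy arrays the same slice keeps the shape `(1, n_features)` instead of collapsing to `(n_features,)`, which is what scikit-learn-style estimators require.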


@@ -1,154 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning Configuration\n",
"\n",
"In this example you will create an Azure Machine Learning `Workspace` object and initialize your notebook directory to easily reload this object from a configuration file. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<subscription_id>\"\n",
"resource_group = \"myrg\"\n",
"workspace_name = \"myws\"\n",
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
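The deleted configuration notebook above persists workspace settings with `ws.write_config()`. The `aml_config/config.json` it writes has roughly this shape — the values below are the placeholders used in the notebook, and the exact field set is an assumption about this SDK version:

```json
{
    "subscription_id": "<subscription_id>",
    "resource_group": "myrg",
    "workspace_name": "myws"
}
```

Other notebooks in the directory then reload the workspace from this file via `Workspace.from_config()`.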


@@ -1,469 +1,449 @@
 {
 "cells": [
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "Copyright (c) Microsoft Corporation. All rights reserved.\n",
 "\n",
 "Licensed under the MIT License."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "# Automated Machine Learning\n",
 "_**Prepare Data using `azureml.dataprep` for Local Execution**_\n",
 "\n",
 "## Contents\n",
 "1. [Introduction](#Introduction)\n",
 "1. [Setup](#Setup)\n",
 "1. [Data](#Data)\n",
 "1. [Train](#Train)\n",
 "1. [Results](#Results)\n",
 "1. [Test](#Test)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Introduction\n",
 "In this example we showcase how you can use the `azureml.dataprep` SDK to load and prepare data for AutoML. `azureml.dataprep` can also be used standalone; full documentation can be found [here](https://github.com/Microsoft/PendletonDocs).\n",
 "\n",
 "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
 "\n",
 "In this notebook you will learn how to:\n",
 "1. Define data loading and preparation steps in a `Dataflow` using `azureml.dataprep`.\n",
 "2. Pass the `Dataflow` to AutoML for a local run.\n",
 "3. Pass the `Dataflow` to AutoML for a remote run."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Setup\n",
 "\n",
 "Currently, Data Prep only supports __Ubuntu 16__ and __Red Hat Enterprise Linux 7__. We are working on supporting more linux distros."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Opt-in diagnostics for better experience, quality, and security of future releases."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azureml.telemetry import set_diagnostics_collection\n",
-"set_diagnostics_collection(send_diagnostics = True)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
 "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "import logging\n",
-"import os\n",
 "\n",
 "import pandas as pd\n",
 "\n",
 "import azureml.core\n",
 "from azureml.core.experiment import Experiment\n",
 "from azureml.core.workspace import Workspace\n",
 "import azureml.dataprep as dprep\n",
 "from azureml.train.automl import AutoMLConfig"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "ws = Workspace.from_config()\n",
 " \n",
 "# choose a name for experiment\n",
 "experiment_name = 'automl-dataprep-local'\n",
 "# project folder\n",
 "project_folder = './sample_projects/automl-dataprep-local'\n",
 " \n",
 "experiment = Experiment(ws, experiment_name)\n",
 " \n",
 "output = {}\n",
 "output['SDK version'] = azureml.core.VERSION\n",
 "output['Subscription ID'] = ws.subscription_id\n",
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
 "output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
-"pd.DataFrame(data = output, index = ['']).T"
+"outputDf = pd.DataFrame(data = output, index = [''])\n",
+"outputDf.T"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Data"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
 "# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
 "simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
 "X = dprep.auto_read_file(simple_example_data_root + 'X.csv').skip(1) # Remove the header row.\n",
 "\n",
 "# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)\n",
 "# and convert column types manually.\n",
 "# Here we read a comma delimited file and convert all columns to integers.\n",
 "y = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Review the Data Preparation Result\n",
 "\n",
 "You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large datasets."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "X.skip(1).head(5)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Train\n",
 "\n",
 "This creates a general AutoML settings object applicable for both local and remote runs."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "automl_settings = {\n",
 " \"iteration_timeout_minutes\" : 10,\n",
 " \"iterations\" : 2,\n",
 " \"primary_metric\" : 'AUC_weighted',\n",
 " \"preprocess\" : False,\n",
 " \"verbosity\" : logging.INFO,\n",
 " \"n_cross_validations\": 3\n",
 "}"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Pass Data with `Dataflow` Objects\n",
 "\n",
 "The `Dataflow` objects captured above can be passed to the `submit` method for a local run. AutoML will retrieve the results from the `Dataflow` for model training."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "automl_config = AutoMLConfig(task = 'classification',\n",
 " debug_log = 'automl_errors.log',\n",
 " X = X,\n",
 " y = y,\n",
 " **automl_settings)"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "local_run = experiment.submit(automl_config, show_output = True)"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "local_run"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Results"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "#### Widget for Monitoring Runs\n",
 "\n",
 "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
 "\n",
 "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "from azureml.widgets import RunDetails\n",
 "RunDetails(local_run).show()"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "#### Retrieve All Child Runs\n",
 "You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "children = list(local_run.get_children())\n",
 "metricslist = {}\n",
 "for run in children:\n",
 " properties = run.get_properties()\n",
 " metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
 " metricslist[int(properties['iteration'])] = metrics\n",
 " \n",
-"import pandas as pd\n",
 "rundata = pd.DataFrame(metricslist).sort_index(1)\n",
 "rundata"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Retrieve the Best Model\n",
 "\n",
 "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "best_run, fitted_model = local_run.get_output()\n",
 "print(best_run)\n",
 "print(fitted_model)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "#### Best Model Based on Any Other Metric\n",
 "Show the run and the model that has the smallest `log_loss` value:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "lookup_metric = \"log_loss\"\n",
 "best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
 "print(best_run)\n",
 "print(fitted_model)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "#### Model from a Specific Iteration\n",
 "Show the run and the model from the first iteration:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "iteration = 0\n",
 "best_run, fitted_model = local_run.get_output(iteration = iteration)\n",
 "print(best_run)\n",
 "print(fitted_model)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Test\n",
 "\n",
 "#### Load Test Data"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "from sklearn import datasets\n",
 "\n",
 "digits = datasets.load_digits()\n",
 "X_test = digits.data[:10, :]\n",
 "y_test = digits.target[:10]\n",
 "images = digits.images[:10]"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "#### Testing Our Best Fitted Model\n",
 "We will try to predict 2 digits and see how our model works."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "#Randomly select digits and test\n",
 "from matplotlib import pyplot as plt\n",
-"from matplotlib.pyplot import imshow\n",
-"import random\n",
 "import numpy as np\n",
 "\n",
 "for index in np.random.choice(len(y_test), 2, replace = False):\n",
 " print(index)\n",
 " predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
 " label = y_test[index]\n",
 " title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
 " fig = plt.figure(1, figsize=(3,3))\n",
 " ax1 = fig.add_axes((0,0,.8,.8))\n",
 " ax1.set_title(title)\n",
 " plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
 " plt.show()"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Appendix"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Capture the `Dataflow` Objects for Later Use in AutoML\n",
 "\n",
 "`Dataflow` objects are immutable and are composed of a list of data preparation steps. A `Dataflow` object can be branched at any point for further usage."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "# sklearn.digits.data + target\n",
 "digits_complete = dprep.auto_read_file('https://dprepdata.blob.core.windows.net/automl-notebook-data/digits-complete.csv')"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "`digits_complete` (sourced from `sklearn.datasets.load_digits()`) is forked into `dflow_X` to capture all the feature columns and `dflow_y` to capture the label column."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
-"digits_complete.to_pandas_dataframe().shape\n",
+"print(digits_complete.to_pandas_dataframe().shape)\n",
 "labels_column = 'Column64'\n",
 "dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
 "dflow_y = digits_complete.keep_columns(columns = [labels_column])"
 ]
 }
 ],
 "metadata": {
 "authors": [
 {
 "name": "savitam"
 }
 ],
 "kernelspec": {
 "display_name": "Python 3.6",
 "language": "python",
 "name": "python36"
 },
 "language_info": {
 "codemirror_mode": {
 "name": "ipython",
 "version": 3
 },
 "file_extension": ".py",
 "mimetype": "text/x-python",
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
 "version": "3.6.5"
 }
 },
 "nbformat": 4,
 "nbformat_minor": 2
 }
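Several cells in the notebook above collect run metadata into a dict, build a one-row DataFrame from it, and transpose it so each property renders as its own row. A standalone sketch of that display pattern (all sample values are placeholders, not real workspace data):

```python
import pandas as pd

# Placeholder run metadata, mirroring the `output` dict in the notebook.
output = {
    "SDK version": "1.0.17",
    "Workspace Name": "my-ws",
    "Resource Group": "my-rg",
    "Experiment Name": "automl-dataprep-local",
}

# One row (with an empty index label), then transpose so every key
# becomes a row and the values form a single column.
outputDf = pd.DataFrame(data=output, index=[""])
print(outputDf.T)
```

Note that the notebook's `pd.set_option('display.max_colwidth', -1)` relies on a deprecated sentinel; newer pandas versions expect `None` to mean "no truncation".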


@@ -1,370 +1,342 @@
 {
 "cells": [
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "Copyright (c) Microsoft Corporation. All rights reserved.\n",
 "\n",
 "Licensed under the MIT License."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "# Automated Machine Learning\n",
 "_**Exploring Previous Runs**_\n",
 "\n",
 "## Contents\n",
 "1. [Introduction](#Introduction)\n",
 "1. [Setup](#Setup)\n",
 "1. [Explore](#Explore)\n",
 "1. [Download](#Download)\n",
 "1. [Register](#Register)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Introduction\n",
 "In this example we present some examples on navigating previously executed runs. We also show how you can download a fitted model for any previous run.\n",
 "\n",
 "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
 "\n",
 "In this notebook you will learn how to:\n",
 "1. List all experiments in a workspace.\n",
 "2. List all AutoML runs in an experiment.\n",
 "3. Get details for an AutoML run, including settings, run widget, and all metrics.\n",
 "4. Download a fitted pipeline for any iteration."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Setup"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
-"import logging\n",
-"import os\n",
-"import random\n",
-"import re\n",
-"\n",
-"from matplotlib import pyplot as plt\n",
-"from matplotlib.pyplot import imshow\n",
-"import numpy as np\n",
-"import pandas as pd\n",
-"from sklearn import datasets\n",
-"\n",
-"import azureml.core\n",
-"from azureml.core.experiment import Experiment\n",
-"from azureml.core.run import Run\n",
-"from azureml.core.workspace import Workspace\n",
-"from azureml.train.automl import AutoMLConfig\n",
-"from azureml.train.automl.run import AutoMLRun"
+"import pandas as pd\n",
+"import json\n",
+"\n",
+"from azureml.core.experiment import Experiment\n",
+"from azureml.core.workspace import Workspace\n",
+"from azureml.train.automl.run import AutoMLRun"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "ws = Workspace.from_config()"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Opt-in diagnostics for better experience, quality, and security of future releases."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azureml.telemetry import set_diagnostics_collection\n",
-"set_diagnostics_collection(send_diagnostics = True)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
 "## Explore"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### List Experiments"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "experiment_list = Experiment.list(workspace=ws)\n",
 "\n",
 "summary_df = pd.DataFrame(index = ['No of Runs'])\n",
 "for experiment in experiment_list:\n",
 " automl_runs = list(experiment.get_runs(type='automl'))\n",
 " summary_df[experiment.name] = [len(automl_runs)]\n",
 " \n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "summary_df.T"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### List runs for an experiment\n",
 "Set `experiment_name` to any experiment name from the result of the Experiment.list cell to load the AutoML runs."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "experiment_name = 'automl-local-classification' # Replace this with any project name from previous cell.\n",
 "\n",
 "proj = ws.experiments[experiment_name]\n",
 "summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name'])\n",
 "automl_runs = list(proj.get_runs(type='automl'))\n",
 "automl_runs_project = []\n",
 "for run in automl_runs:\n",
 " properties = run.get_properties()\n",
 " tags = run.get_tags()\n",
-" amlsettings = eval(properties['RawAMLSettingsString'])\n",
+" amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
 " if 'iterations' in tags:\n",
 " iterations = tags['iterations']\n",
 " else:\n",
 " iterations = properties['num_iterations']\n",
 " summary_df[run.id] = [amlsettings['task_type'], run.get_details()['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name']]\n",
 " if run.get_details()['status'] == 'Completed':\n",
 " automl_runs_project.append(run.id)\n",
 " \n",
 "from IPython.display import HTML\n",
 "projname_html = HTML(\"<h3>{}</h3>\".format(proj.name))\n",
 "\n",
 "from IPython.display import display\n",
 "display(projname_html)\n",
 "display(summary_df.T)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Get details for a run\n",
 "\n",
 "Copy the project name and run id from the previous cell output to find more details on a particular run."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "run_id = automl_runs_project[0] # Replace with your own run_id from above run ids\n",
 "assert (run_id in summary_df.keys()), \"Run id not found! Please set run id to a value from above run ids\"\n",
 "\n",
 "from azureml.widgets import RunDetails\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "ml_run = AutoMLRun(experiment = experiment, run_id = run_id)\n",
 "\n",
 "summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name', 'Start Time', 'End Time'])\n",
 "properties = ml_run.get_properties()\n",
 "tags = ml_run.get_tags()\n",
 "status = ml_run.get_details()\n",
-"amlsettings = eval(properties['RawAMLSettingsString'])\n",
+"amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
 "if 'iterations' in tags:\n",
 " iterations = tags['iterations']\n",
 "else:\n",
 " iterations = properties['num_iterations']\n",
 "start_time = None\n",
 "if 'startTimeUtc' in status:\n",
 " start_time = status['startTimeUtc']\n",
 "end_time = None\n",
 "if 'endTimeUtc' in status:\n",
 " end_time = status['endTimeUtc']\n",
 "summary_df[ml_run.id] = [amlsettings['task_type'], status['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name'], start_time, end_time]\n",
"display(HTML('<h3>Runtime Details</h3>'))\n", },
"display(summary_df)\n", {
"\n", "cell_type": "markdown",
"#settings_df = pd.DataFrame(data = amlsettings, index = [''])\n", "metadata": {},
"display(HTML('<h3>AutoML Settings</h3>'))\n", "source": [
"display(amlsettings)\n", "### Download the Best Model for Any Given Metric"
"\n", ]
"display(HTML('<h3>Iterations</h3>'))\n", },
"RunDetails(ml_run).show() \n", {
"\n", "cell_type": "code",
"children = list(ml_run.get_children())\n", "execution_count": null,
"metricslist = {}\n", "metadata": {},
"for run in children:\n", "outputs": [],
" properties = run.get_properties()\n", "source": [
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n", "metric = 'AUC_weighted' # Replace with a metric name.\n",
" metricslist[int(properties['iteration'])] = metrics\n", "best_run, fitted_model = ml_run.get_output(metric = metric)\n",
"\n", "fitted_model"
"rundata = pd.DataFrame(metricslist).sort_index(1)\n", ]
"display(HTML('<h3>Metrics</h3>'))\n", },
"display(rundata)\n" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "### Download the Model for Any Given Iteration"
"metadata": {}, ]
"source": [ },
"## Download" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "markdown", "outputs": [],
"metadata": {}, "source": [
"source": [ "iteration = 1 # Replace with an iteration number.\n",
"### Download the Best Model for Any Given Metric" "best_run, fitted_model = ml_run.get_output(iteration = iteration)\n",
] "fitted_model"
}, ]
{ },
"cell_type": "code", {
"execution_count": null, "cell_type": "markdown",
"metadata": {}, "metadata": {},
"outputs": [], "source": [
"source": [ "## Register"
"metric = 'AUC_weighted' # Replace with a metric name.\n", ]
"best_run, fitted_model = ml_run.get_output(metric = metric)\n", },
"fitted_model" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "### Register fitted model for deployment\n",
"metadata": {}, "If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered."
"source": [ ]
"### Download the Model for Any Given Iteration" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "description = 'AutoML Model'\n",
"source": [ "tags = None\n",
"iteration = 1 # Replace with an iteration number.\n", "ml_run.register_model(description = description, tags = tags)\n",
"best_run, fitted_model = ml_run.get_output(iteration = iteration)\n", "print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
"fitted_model" ]
] },
}, {
{ "cell_type": "markdown",
"cell_type": "markdown", "metadata": {},
"metadata": {}, "source": [
"source": [ "### Register the Best Model for Any Given Metric"
"## Register" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "markdown", "execution_count": null,
"metadata": {}, "metadata": {},
"source": [ "outputs": [],
"### Register fitted model for deployment\n", "source": [
"If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered." "metric = 'AUC_weighted' # Replace with a metric name.\n",
] "description = 'AutoML Model'\n",
}, "tags = None\n",
{ "ml_run.register_model(description = description, tags = tags, metric = metric)\n",
"cell_type": "code", "print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "markdown",
"description = 'AutoML Model'\n", "metadata": {},
"tags = None\n", "source": [
"ml_run.register_model(description = description, tags = tags)\n", "### Register the Model for Any Given Iteration"
"ml_run.model_id # Use this id to deploy the model as a web service in Azure." ]
] },
}, {
{ "cell_type": "code",
"cell_type": "markdown", "execution_count": null,
"metadata": {}, "metadata": {},
"source": [ "outputs": [],
"### Register the Best Model for Any Given Metric" "source": [
] "iteration = 1 # Replace with an iteration number.\n",
}, "description = 'AutoML Model'\n",
{ "tags = None\n",
"cell_type": "code", "ml_run.register_model(description = description, tags = tags, iteration = iteration)\n",
"execution_count": null, "print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
"metadata": {}, ]
"outputs": [], }
"source": [
"metric = 'AUC_weighted' # Replace with a metric name.\n",
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description = description, tags = tags, metric = metric)\n",
"print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register the Model for Any Given Iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 1 # Replace with an iteration number.\n",
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description = description, tags = tags, iteration = iteration)\n",
"print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
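One change worth noting in this file: the run-details cell now parses the persisted AutoML settings with `json.loads(properties['AMLSettingsJsonString'])`, where the previous revision called `eval` on a raw settings string. A minimal sketch of the safer parse, using a hypothetical settings string (the real `AMLSettingsJsonString` carries many more fields):

```python
import json

# Hypothetical settings string as a run might persist it.
settings_json = '{"task_type": "classification", "name": "automl-demo"}'

# json.loads only parses JSON data; eval() would execute any Python
# expression embedded in the string, which is unsafe for stored values.
amlsettings = json.loads(settings_json)
print(amlsettings['task_type'])  # classification
```

Because the settings travel through run history as plain text, parsing them as data rather than executing them is the more defensive choice.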


@@ -1,418 +1,381 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Energy Demand Forecasting**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example, we show how AutoML can be used for energy demand forecasting.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Instantiating AutoMLConfig with new task type \"forecasting\" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: \"time_column_name\"\n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Testing the fitted model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import logging\n",
"import warnings\n",
"# Squash warning messages for cleaner output in the notebook\n",
"warnings.showwarning = lambda *args, **kwargs: None\n",
"\n",
"\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from matplotlib import pyplot as plt\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = 'automl-energydemandforecasting'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-energydemandforecasting'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"Read energy demand data from file, and preview data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.read_csv(\"nyc_energy.csv\", parse_dates=['timeStamp'])\n",
"data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split the data to train and test\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train = data[data['timeStamp'] < '2017-02-01']\n",
"test = data[data['timeStamp'] >= '2017-02-01']\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare the test data, we will feed X_test to the fitted model and get prediction"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_test = test.pop('demand').values\n",
"X_test = test"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split the train data to train and valid\n",
"\n",
"Use one month's data as valid data\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train = train[train['timeStamp'] < '2017-01-01']\n",
"X_valid = train[train['timeStamp'] >= '2017-01-01']\n",
"y_train = X_train.pop('demand').values\n",
"y_valid = X_valid.pop('demand').values\n",
"print(X_train.shape)\n",
"print(y_train.shape)\n",
"print(X_valid.shape)\n",
"print(y_valid.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
"|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**X_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"time_column_name = 'timeStamp'\n",
"automl_settings = {\n",
"    \"time_column_name\": time_column_name,\n",
"}\n",
"\n",
"\n",
"automl_config = AutoMLConfig(task = 'forecasting',\n",
"                             debug_log = 'automl_nyc_energy_errors.log',\n",
"                             primary_metric='normalized_root_mean_squared_error',\n",
"                             iterations = 10,\n",
"                             iteration_timeout_minutes = 5,\n",
"                             X = X_train,\n",
"                             y = y_train,\n",
"                             X_valid = X_valid,\n",
"                             y_valid = y_valid,\n",
"                             path=project_folder,\n",
"                             verbosity = logging.INFO,\n",
"                             **automl_settings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Below we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"fitted_model.steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"Predict on training and test set, and calculate residual values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_pred = fitted_model.predict(X_test)\n",
"y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use the Check Data Function to remove the nan values from y_test to avoid error when calculate metrics "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if len(y_test) != len(y_pred):\n",
"    raise ValueError(\n",
"        'the true values and prediction values do not have equal length.')\n",
"elif len(y_test) == 0:\n",
"    raise ValueError(\n",
"        'y_true and y_pred are empty.')\n",
"\n",
"# if there is any non-numeric element in the y_true or y_pred,\n",
"# the ValueError exception will be thrown.\n",
"y_test_f = np.array(y_test).astype(float)\n",
"y_pred_f = np.array(y_pred).astype(float)\n",
"\n",
"# remove entries both in y_true and y_pred where at least\n",
"# one element in y_true or y_pred is missing\n",
"y_test = y_test_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))]\n",
"y_pred = y_pred_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calculate metrics for the prediction\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"[Test Data] \\nRoot Mean squared error: %.2f\" % np.sqrt(mean_squared_error(y_test, y_pred)))\n",
"# Explained variance score: 1 is perfect prediction\n",
"print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))\n",
"print('R2 score: %.2f' % r2_score(y_test, y_pred))\n",
"\n",
"\n",
"# Plot outputs\n",
"%matplotlib notebook\n",
"test_pred = plt.scatter(y_test, y_pred, color='b')\n",
"test_test = plt.scatter(y_test, y_test, color='g')\n",
"plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n",
"plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "xiaga"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
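The energy-demand notebook now inlines the former `_check_calc_input` helper: it length-checks the two vectors, coerces them to float, and drops every position where either value is NaN before computing metrics. A small sketch of that masking step with toy arrays (not the notebook's real data):

```python
import numpy as np

# Toy stand-ins for y_test / y_pred; in the notebook these come from the
# NYC energy dataset and the fitted model's predictions.
y_test = np.array([1.0, np.nan, 3.0, 4.0])
y_pred = np.array([1.1, 2.0, np.nan, 3.9])

# Keep only positions where BOTH vectors are numeric, mirroring the
# inlined check-data logic above.
mask = ~(np.isnan(y_test) | np.isnan(y_pred))
y_test_clean = y_test[mask]
y_pred_clean = y_pred[mask]
print(y_test_clean, y_pred_clean)  # entries at indices 0 and 3 survive
```

Filtering both vectors with the same combined mask keeps them aligned, which is what the sklearn metric functions that follow require.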


@@ -1,413 +1,425 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Orange Juice Sales Forecasting**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example, we use AutoML to find and tune a time-series forecasting model.\n",
"\n",
"Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook, you will:\n",
"1. Create an Experiment in an existing Workspace\n",
"2. Instantiate an AutoMLConfig\n",
"3. Find and train a forecasting model using local compute\n",
"4. Evaluate the performance of the model\n",
"\n",
"The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import logging\n",
"import warnings\n",
"# Squash warning messages for cleaner output in the notebook\n",
"warnings.showwarning = lambda *args, **kwargs: None\n",
"\n",
"\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model. "
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "code",
"source": [ "execution_count": null,
"ws = Workspace.from_config()\n", "metadata": {},
"\n", "outputs": [],
"# choose a name for the run history container in the workspace\n", "source": [
"experiment_name = 'automl-ojsalesforecasting'\n", "ws = Workspace.from_config()\n",
"# project folder\n", "\n",
"project_folder = './sample_projects/automl-local-ojsalesforecasting'\n", "# choose a name for the run history container in the workspace\n",
"\n", "experiment_name = 'automl-ojsalesforecasting'\n",
"experiment = Experiment(ws, experiment_name)\n", "# project folder\n",
"\n", "project_folder = './sample_projects/automl-local-ojsalesforecasting'\n",
"output = {}\n", "\n",
"output['SDK version'] = azureml.core.VERSION\n", "experiment = Experiment(ws, experiment_name)\n",
"output['Subscription ID'] = ws.subscription_id\n", "\n",
"output['Workspace'] = ws.name\n", "output = {}\n",
"output['Resource Group'] = ws.resource_group\n", "output['SDK version'] = azureml.core.VERSION\n",
"output['Location'] = ws.location\n", "output['Subscription ID'] = ws.subscription_id\n",
"output['Project Directory'] = project_folder\n", "output['Workspace'] = ws.name\n",
"output['Run History Name'] = experiment_name\n", "output['Resource Group'] = ws.resource_group\n",
"pd.set_option('display.max_colwidth', -1)\n", "output['Location'] = ws.location\n",
"pd.DataFrame(data=output, index=['']).T" "output['Project Directory'] = project_folder\n",
] "output['Run History Name'] = experiment_name\n",
}, "pd.set_option('display.max_colwidth', -1)\n",
{ "outputDf = pd.DataFrame(data = output, index = [''])\n",
"cell_type": "markdown", "outputDf.T"
"metadata": {}, ]
"source": [ },
"## Data\n", {
"You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type." "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## Data\n",
"cell_type": "code", "You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type."
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "code",
"time_column_name = 'WeekStarting'\n", "execution_count": null,
"data = pd.read_csv(\"dominicks_OJ.csv\", parse_dates=[time_column_name])\n", "metadata": {},
"data.head()" "outputs": [],
] "source": [
}, "time_column_name = 'WeekStarting'\n",
{ "data = pd.read_csv(\"dominicks_OJ.csv\", parse_dates=[time_column_name])\n",
"cell_type": "markdown", "data.head()"
"metadata": {}, ]
"source": [ },
"Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. \n", {
"\n", "cell_type": "markdown",
"The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series: " "metadata": {},
] "source": [
}, "Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. \n",
{ "\n",
"cell_type": "code", "The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series: "
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "code",
"grain_column_names = ['Store', 'Brand']\n", "execution_count": null,
"nseries = data.groupby(grain_column_names).ngroups\n", "metadata": {},
"print('Data contains {0} individual time-series.'.format(nseries))" "outputs": [],
] "source": [
}, "grain_column_names = ['Store', 'Brand']\n",
{ "nseries = data.groupby(grain_column_names).ngroups\n",
"cell_type": "markdown", "print('Data contains {0} individual time-series.'.format(nseries))"
"metadata": {}, ]
"source": [ },
"### Data Splitting\n", {
"For the purposes of demonstration and later forecast evaluation, we now split the data into a training and a testing set. The test set will contain the final 20 weeks of observed sales for each time-series." "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "For demonstration purposes, we extract sales time-series for just a few of the stores:"
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "code",
"source": [ "execution_count": null,
"ntest_periods = 20\n", "metadata": {},
"\n", "outputs": [],
"def split_last_n_by_grain(df, n):\n", "source": [
" \"\"\"\n", "use_stores = [2, 5, 8]\n",
" Group df by grain and split on last n rows for each group\n", "data_subset = data[data.Store.isin(use_stores)]\n",
" \"\"\"\n", "nseries = data_subset.groupby(grain_column_names).ngroups\n",
" df_grouped = (df.sort_values(time_column_name) # Sort by ascending time\n", "print('Data subset contains {0} individual time-series.'.format(nseries))"
" .groupby(grain_column_names, group_keys=False))\n", ]
" df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])\n", },
" df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])\n", {
" return df_head, df_tail\n", "cell_type": "markdown",
"\n", "metadata": {},
"X_train, X_test = split_last_n_by_grain(data, ntest_periods)" "source": [
] "### Data Splitting\n",
}, "We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns."
{ ]
"cell_type": "markdown", },
"metadata": {}, {
"source": [ "cell_type": "code",
"## Modeling\n", "execution_count": null,
"\n", "metadata": {},
"For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:\n", "outputs": [],
"* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span \n", "source": [
"* Impute missing values in the target (via forward-fill) and feature columns (using median column values) \n", "n_test_periods = 20\n",
"* Create grain-based features to enable fixed effects across different series\n", "\n",
"* Create time-based features to assist in learning seasonal patterns\n", "def split_last_n_by_grain(df, n):\n",
"* Encode categorical variables to numeric quantities\n", " \"\"\"Group df by grain and split on last n rows for each group.\"\"\"\n",
"\n", " df_grouped = (df.sort_values(time_column_name) # Sort by ascending time\n",
"AutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.\n", " .groupby(grain_column_names, group_keys=False))\n",
"\n", " df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])\n",
"You are almost ready to start an AutoML training job. We will first need to create a validation set from the existing training set (i.e. for hyper-parameter tuning): " " df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])\n",
] " return df_head, df_tail\n",
}, "\n",
{ "X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)"
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "markdown",
"source": [ "metadata": {},
"nvalidation_periods = 20\n", "source": [
"X_train, X_validate = split_last_n_by_grain(X_train, nvalidation_periods)" "## Modeling\n",
] "\n",
}, "For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:\n",
{ "* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span \n",
"cell_type": "markdown", "* Impute missing values in the target (via forward-fill) and feature columns (using median column values) \n",
"metadata": {}, "* Create grain-based features to enable fixed effects across different series\n",
"source": [ "* Create time-based features to assist in learning seasonal patterns\n",
"We also need to separate the target column from the rest of the DataFrame: " "* Encode categorical variables to numeric quantities\n",
] "\n",
}, "AutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.\n",
{ "\n",
"cell_type": "code", "You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame: "
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "code",
"target_column_name = 'Quantity'\n", "execution_count": null,
"y_train = X_train.pop(target_column_name).values\n", "metadata": {},
"y_validate = X_validate.pop(target_column_name).values " "outputs": [],
] "source": [
}, "target_column_name = 'Quantity'\n",
{ "y_train = X_train.pop(target_column_name).values"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"## Train\n", "cell_type": "markdown",
"\n", "metadata": {},
"The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, and the training and validation data. \n", "source": [
"\n", "## Train\n",
"For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time and the grain column names. A time column is required for forecasting, while the grain is optional. If a grain is not given, the forecaster assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. \n", "\n",
"\n", "The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. \n",
"|Property|Description|\n", "\n",
"|-|-|\n", "For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.\n",
"|**task**|forecasting|\n", "\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n", "The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up-to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organizaion that needs to estimate the next month of sales would set the horizon accordingly. \n",
"|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|\n", "\n",
"|**X**|Training matrix of features, shape = [n_training_samples, n_features]|\n", "Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.\n",
"|**y**|Target values, shape = [n_training_samples, ]|\n", "\n",
"|**X_valid**|Validation matrix of features, shape = [n_validation_samples, n_features]|\n", "Here is a summary of AutoMLConfig parameters used for training the OJ model:\n",
"|**y_valid**|Target values for validation, shape = [n_validation_samples, ]\n", "\n",
"|**enable_ensembling**|Allow AutoML to create ensembles of the best performing models\n", "|Property|Description|\n",
"|**debug_log**|Log file path for writing debugging information\n", "|-|-|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. " "|**task**|forecasting|\n",
] "|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
}, "|**iterations**|Number of iterations. In each iteration, AutoML trains a specific pipeline on the given data|\n",
{ "|**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]|\n",
"cell_type": "code", "|**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]|\n",
"execution_count": null, "|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|\n",
"metadata": {}, "|**enable_ensembling**|Allow AutoML to create ensembles of the best performing models\n",
"outputs": [], "|**debug_log**|Log file path for writing debugging information\n",
"source": [ "|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
"automl_settings = {\n", "|**time_column_name**|Name of the datetime column in the input data|\n",
" 'time_column_name': time_column_name,\n", "|**grain_column_names**|Name(s) of the columns defining individual series in the input data|\n",
" 'grain_column_names': grain_column_names,\n", "|**drop_column_names**|Name(s) of columns to drop prior to modeling|\n",
" 'drop_column_names': ['logQuantity']\n", "|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|"
"}\n", ]
"\n", },
"automl_config = AutoMLConfig(task='forecasting',\n", {
" debug_log='automl_oj_sales_errors.log',\n", "cell_type": "code",
" primary_metric='normalized_root_mean_squared_error',\n", "execution_count": null,
" iterations=10,\n", "metadata": {},
" X=X_train,\n", "outputs": [],
" y=y_train,\n", "source": [
" X_valid=X_validate,\n", "time_series_settings = {\n",
" y_valid=y_validate,\n", " 'time_column_name': time_column_name,\n",
" enable_ensembling=False,\n", " 'grain_column_names': grain_column_names,\n",
" path=project_folder,\n", " 'drop_column_names': ['logQuantity'],\n",
" verbosity=logging.INFO,\n", " 'max_horizon': n_test_periods\n",
" **automl_settings)" "}\n",
] "\n",
}, "automl_config = AutoMLConfig(task='forecasting',\n",
{ " debug_log='automl_oj_sales_errors.log',\n",
"cell_type": "markdown", " primary_metric='normalized_root_mean_squared_error',\n",
"metadata": {}, " iterations=10,\n",
"source": [ " X=X_train,\n",
"You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.\n", " y=y_train,\n",
"Information from each iteration will be printed to the console." " n_cross_validations=5,\n",
] " enable_ensembling=False,\n",
}, " path=project_folder,\n",
{ " verbosity=logging.INFO,\n",
"cell_type": "code", " **time_series_settings)"
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "markdown",
"local_run = experiment.submit(automl_config, show_output=True)" "metadata": {},
] "source": [
}, "You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.\n",
{ "Information from each iteration will be printed to the console."
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "code",
"source": [ "execution_count": null,
"local_run" "metadata": {},
] "outputs": [],
}, "source": [
{ "local_run = experiment.submit(automl_config, show_output=True)"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"### Retrieve the Best Model\n", "cell_type": "code",
"Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:" "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "local_run"
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "markdown",
"best_run, fitted_pipeline = local_run.get_output()\n", "metadata": {},
"fitted_pipeline.steps" "source": [
] "### Retrieve the Best Model\n",
}, "Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:"
{ ]
"cell_type": "markdown", },
"metadata": {}, {
"source": [ "cell_type": "code",
"### Make Predictions from the Best Fitted Model\n", "execution_count": null,
"Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:" "metadata": {},
] "outputs": [],
}, "source": [
{ "best_run, fitted_pipeline = local_run.get_output()\n",
"cell_type": "code", "fitted_pipeline.steps"
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "markdown",
"y_test = X_test.pop(target_column_name).values" "metadata": {},
] "source": [
}, "### Make Predictions from the Best Fitted Model\n",
{ "Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:"
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "code",
"source": [ "execution_count": null,
"X_test.head()" "metadata": {},
] "outputs": [],
}, "source": [
{ "y_test = X_test.pop(target_column_name).values"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. \n", "cell_type": "code",
"\n", "execution_count": null,
"The target predictions can be retrieved by calling the `predict` method on the best model:" "metadata": {},
] "outputs": [],
}, "source": [
{ "X_test.head()"
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "markdown",
"source": [ "metadata": {},
"y_pred = fitted_pipeline.predict(X_test)" "source": [
] "To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. \n",
}, "\n",
{ "The target predictions can be retrieved by calling the `predict` method on the best model:"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"### Calculate evaluation metrics for the prediction\n", "cell_type": "code",
"To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE)." "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "y_pred = fitted_pipeline.predict(X_test)"
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "markdown",
"def MAPE(actual, pred):\n", "metadata": {},
" \"\"\"\n", "source": [
" Calculate mean absolute percentage error.\n", "### Calculate evaluation metrics for the prediction\n",
" Remove NA and values where actual is close to zero\n", "To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE)."
" \"\"\"\n", ]
" not_na = ~(np.isnan(actual) | np.isnan(pred))\n", },
" not_zero = ~np.isclose(actual, 0.0)\n", {
" actual_safe = actual[not_na & not_zero]\n", "cell_type": "code",
" pred_safe = pred[not_na & not_zero]\n", "execution_count": null,
" APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)\n", "metadata": {},
" return np.mean(APE)\n", "outputs": [],
"\n", "source": [
"print(\"[Test Data] \\nRoot Mean squared error: %.2f\" % np.sqrt(mean_squared_error(y_test, y_pred)))\n", "def MAPE(actual, pred):\n",
"print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))\n", " \"\"\"\n",
"print('MAPE: %.2f' % MAPE(y_test, y_pred))" " Calculate mean absolute percentage error.\n",
] " Remove NA and values where actual is close to zero\n",
} " \"\"\"\n",
], " not_na = ~(np.isnan(actual) | np.isnan(pred))\n",
"metadata": { " not_zero = ~np.isclose(actual, 0.0)\n",
"authors": [ " actual_safe = actual[not_na & not_zero]\n",
{ " pred_safe = pred[not_na & not_zero]\n",
"name": "erwright" " APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)\n",
} " return np.mean(APE)\n",
"\n",
"print(\"[Test Data] \\nRoot Mean squared error: %.2f\" % np.sqrt(mean_squared_error(y_test, y_pred)))\n",
"print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))\n",
"print('MAPE: %.2f' % MAPE(y_test, y_pred))"
]
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "erwright"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
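Stripped of the notebook scaffolding, the grain-based train/test split and the MAPE metric defined in the cells above can be exercised on their own. This is a minimal, self-contained sketch using synthetic data (the store numbers, brand names, and ten-week series are illustrative stand-ins, not the Dominick's dataset):

```python
import numpy as np
import pandas as pd

def split_last_n_by_grain(df, n, time_column_name, grain_column_names):
    """Group df by grain and split off the last n rows of each group."""
    df_grouped = (df.sort_values(time_column_name)  # Sort by ascending time
                    .groupby(grain_column_names, group_keys=False))
    df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
    df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
    return df_head, df_tail

def MAPE(actual, pred):
    """Mean absolute percentage error, ignoring NaNs and near-zero actuals."""
    not_na = ~(np.isnan(actual) | np.isnan(pred))
    not_zero = ~np.isclose(actual, 0.0)
    actual_safe = actual[not_na & not_zero]
    pred_safe = pred[not_na & not_zero]
    return np.mean(100 * np.abs((actual_safe - pred_safe) / actual_safe))

# Two synthetic weekly series, one per Store/Brand combination
dates = pd.date_range('2019-01-07', periods=10, freq='W-MON')
data = pd.DataFrame({
    'WeekStarting': list(dates) * 2,
    'Store': [2] * 10 + [5] * 10,
    'Brand': ['tropicana'] * 10 + ['minute.maid'] * 10,
    'Quantity': np.arange(20, dtype=float),
})

# Hold out the final 3 weeks of each series, as the notebook does with 20
train, test = split_last_n_by_grain(data, 3, 'WeekStarting', ['Store', 'Brand'])
print(len(train), len(test))  # 14 6 -- exactly 3 test rows per series
print(round(MAPE(np.array([100.0, 200.0]), np.array([110.0, 180.0])), 2))  # 10.0
```

The key design point is that the split is stratified by series: sorting by time and slicing within each group guarantees every Store/Brand combination contributes its own final weeks to the test set, rather than one long series dominating the holdout.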


@@ -1,401 +1,379 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n", "Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Automated Machine Learning\n", "# Automated Machine Learning\n",
"_**Blacklisting Models, Early Termination, and Handling Missing Data**_\n", "_**Blacklisting Models, Early Termination, and Handling Missing Data**_\n",
"\n", "\n",
"## Contents\n", "## Contents\n",
"1. [Introduction](#Introduction)\n", "1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n", "1. [Setup](#Setup)\n",
"1. [Data](#Data)\n", "1. [Data](#Data)\n",
"1. [Train](#Train)\n", "1. [Train](#Train)\n",
"1. [Results](#Results)\n", "1. [Results](#Results)\n",
"1. [Test](#Test)\n" "1. [Test](#Test)\n"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Introduction\n", "## Introduction\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for handling missing values in data. We also provide a stopping metric indicating a target for the primary metrics so that AutoML can terminate the run without necessarly going through all the iterations. Finally, if you want to avoid a certain pipeline, we allow you to specify a blacklist of algorithms that AutoML will ignore for this run.\n", "In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for handling missing values in data. We also provide a stopping metric indicating a target for the primary metrics so that AutoML can terminate the run without necessarly going through all the iterations. Finally, if you want to avoid a certain pipeline, we allow you to specify a blacklist of algorithms that AutoML will ignore for this run.\n",
"\n", "\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n", "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n", "\n",
"In this notebook you will learn how to:\n", "In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n", "1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n", "2. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model.\n", "4. Train the model.\n",
"5. Explore the results.\n", "5. Explore the results.\n",
"6. Test the best fitted model.\n", "6. Test the best fitted model.\n",
"\n", "\n",
"In addition this notebook showcases the following features\n", "In addition this notebook showcases the following features\n",
"- **Blacklisting** certain pipelines\n", "- **Blacklisting** certain pipelines\n",
"- Specifying **target metrics** to indicate stopping criteria\n", "- Specifying **target metrics** to indicate stopping criteria\n",
"- Handling **missing data** in the input" "- Handling **missing data** in the input"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Setup\n", "## Setup\n",
"\n", "\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import logging\n", "import logging\n",
"import os\n", "\n",
"import random\n", "from matplotlib import pyplot as plt\n",
"\n", "import numpy as np\n",
"from matplotlib import pyplot as plt\n", "import pandas as pd\n",
"from matplotlib.pyplot import imshow\n", "from sklearn import datasets\n",
"import numpy as np\n", "\n",
"import pandas as pd\n", "import azureml.core\n",
"from sklearn import datasets\n", "from azureml.core.experiment import Experiment\n",
"\n", "from azureml.core.workspace import Workspace\n",
"import azureml.core\n", "from azureml.train.automl import AutoMLConfig"
"from azureml.core.experiment import Experiment\n", ]
"from azureml.core.workspace import Workspace\n", },
"from azureml.train.automl import AutoMLConfig\n", {
"from azureml.train.automl.run import AutoMLRun" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "ws = Workspace.from_config()\n",
"metadata": {}, "\n",
"outputs": [], "# Choose a name for the experiment.\n",
"source": [ "experiment_name = 'automl-local-missing-data'\n",
"ws = Workspace.from_config()\n", "project_folder = './sample_projects/automl-local-missing-data'\n",
"\n", "\n",
"# Choose a name for the experiment.\n", "experiment = Experiment(ws, experiment_name)\n",
"experiment_name = 'automl-local-missing-data'\n", "\n",
"project_folder = './sample_projects/automl-local-missing-data'\n", "output = {}\n",
"\n", "output['SDK version'] = azureml.core.VERSION\n",
"experiment = Experiment(ws, experiment_name)\n", "output['Subscription ID'] = ws.subscription_id\n",
"\n", "output['Workspace'] = ws.name\n",
"output = {}\n", "output['Resource Group'] = ws.resource_group\n",
"output['SDK version'] = azureml.core.VERSION\n", "output['Location'] = ws.location\n",
"output['Subscription ID'] = ws.subscription_id\n", "output['Project Directory'] = project_folder\n",
"output['Workspace'] = ws.name\n", "output['Experiment Name'] = experiment.name\n",
"output['Resource Group'] = ws.resource_group\n", "pd.set_option('display.max_colwidth', -1)\n",
"output['Location'] = ws.location\n", "outputDf = pd.DataFrame(data = output, index = [''])\n",
"output['Project Directory'] = project_folder\n", "outputDf.T"
"output['Experiment Name'] = experiment.name\n", ]
"pd.set_option('display.max_colwidth', -1)\n", },
"pd.DataFrame(data=output, index=['']).T" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "## Data"
"metadata": {}, ]
"source": [ },
"Opt-in diagnostics for better experience, quality, and security of future releases." {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "digits = datasets.load_digits()\n",
"outputs": [], "X_train = digits.data[10:,:]\n",
"source": [ "y_train = digits.target[10:]\n",
"from azureml.telemetry import set_diagnostics_collection\n", "\n",
"set_diagnostics_collection(send_diagnostics = True)" "# Add missing values in 75% of the lines.\n",
] "missing_rate = 0.75\n",
}, "n_missing_samples = int(np.floor(X_train.shape[0] * missing_rate))\n",
{ "missing_samples = np.hstack((np.zeros(X_train.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))\n",
"cell_type": "markdown", "rng = np.random.RandomState(0)\n",
"metadata": {}, "rng.shuffle(missing_samples)\n",
"source": [ "missing_features = rng.randint(0, X_train.shape[1], n_missing_samples)\n",
"## Data" "X_train[np.where(missing_samples)[0], missing_features] = np.nan"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from scipy import sparse\n", "df = pd.DataFrame(data = X_train)\n",
"\n", "df['Label'] = pd.Series(y_train, index=df.index)\n",
"digits = datasets.load_digits()\n", "df.head()"
"X_train = digits.data[10:,:]\n", ]
"y_train = digits.target[10:]\n", },
"\n", {
"# Add missing values in 75% of the lines.\n", "cell_type": "markdown",
"missing_rate = 0.75\n", "metadata": {},
"n_missing_samples = int(np.floor(X_train.shape[0] * missing_rate))\n", "source": [
"missing_samples = np.hstack((np.zeros(X_train.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))\n", "## Train\n",
"rng = np.random.RandomState(0)\n", "\n",
"rng.shuffle(missing_samples)\n", "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment. This includes setting `experiment_exit_score`, which should cause the run to complete before the `iterations` count is reached.\n",
"missing_features = rng.randint(0, X_train.shape[1], n_missing_samples)\n", "\n",
"X_train[np.where(missing_samples)[0], missing_features] = np.nan" "|Property|Description|\n",
] "|-|-|\n",
}, "|**task**|classification or regression|\n",
{ "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"cell_type": "code", "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"execution_count": null, "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"metadata": {}, "|**n_cross_validations**|Number of cross validation splits.|\n",
"outputs": [], "|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|\n",
"source": [ "|**experiment_exit_score**|*double* value indicating the target for *primary_metric*. <br>Once the target is surpassed the run terminates.|\n",
"df = pd.DataFrame(data = X_train)\n", "|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGD</i><br><i>MultinomialNaiveBayes</i><br><i>BernoulliNaiveBayes</i><br><i>SVM</i><br><i>LinearSVM</i><br><i>KNN</i><br><i>DecisionTree</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>GradientBoosting</i><br><i>TensorFlowDNN</i><br><i>TensorFlowLinearClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i>|\n",
"df['Label'] = pd.Series(y_train, index=df.index)\n", "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"df.head()" "|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
] "|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "code",
"source": [ "execution_count": null,
"## Train\n", "metadata": {},
"\n", "outputs": [],
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment. This includes setting `experiment_exit_score`, which should cause the run to complete before the `iterations` count is reached.\n", "source": [
"\n", "automl_config = AutoMLConfig(task = 'classification',\n",
"|Property|Description|\n", " debug_log = 'automl_errors.log',\n",
"|-|-|\n", " primary_metric = 'AUC_weighted',\n",
"|**task**|classification or regression|\n", " iteration_timeout_minutes = 60,\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n", " iterations = 20,\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n", " n_cross_validations = 5,\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n", " preprocess = True,\n",
"|**n_cross_validations**|Number of cross validation splits.|\n", " experiment_exit_score = 0.9984,\n",
"|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|\n", " blacklist_models = ['KNN','LinearSVM'],\n",
"|**experiment_exit_score**|*double* value indicating the target for *primary_metric*. <br>Once the target is surpassed the run terminates.|\n", " verbosity = logging.INFO,\n",
"|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGD</i><br><i>MultinomialNaiveBayes</i><br><i>BernoulliNaiveBayes</i><br><i>SVM</i><br><i>LinearSVM</i><br><i>KNN</i><br><i>DecisionTree</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>GradientBoosting</i><br><i>TensorFlowDNN</i><br><i>TensorFlowLinearClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i>|\n", " X = X_train, \n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n", " y = y_train,\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n", " path = project_folder)"
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|" ]
] },
}, {
{ "cell_type": "markdown",
"cell_type": "code", "metadata": {},
"execution_count": null, "source": [
"metadata": {}, "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"outputs": [], "In this example, we specify `show_output = True` to print currently running iterations to the console."
"source": [ ]
"automl_config = AutoMLConfig(task = 'classification',\n", },
" debug_log = 'automl_errors.log',\n", {
" primary_metric = 'AUC_weighted',\n", "cell_type": "code",
" iteration_timeout_minutes = 60,\n", "execution_count": null,
" iterations = 20,\n", "metadata": {},
" n_cross_validations = 5,\n", "outputs": [],
" preprocess = True,\n", "source": [
" experiment_exit_score = 0.9984,\n", "local_run = experiment.submit(automl_config, show_output = True)"
" blacklist_models = ['KNN','LinearSVM'],\n", ]
" verbosity = logging.INFO,\n", },
" X = X_train, \n", {
" y = y_train,\n", "cell_type": "code",
" path = project_folder)" "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "markdown", "local_run"
"metadata": {}, ]
"source": [ },
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n", {
"In this example, we specify `show_output = True` to print currently running iterations to the console." "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## Results"
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "markdown",
"source": [ "metadata": {},
"local_run = experiment.submit(automl_config, show_output = True)" "source": [
] "#### Widget for Monitoring Runs\n",
}, "\n",
{ "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"cell_type": "code", "\n",
"execution_count": null, "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
"metadata": {}, ]
"outputs": [], },
"source": [ {
"local_run" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "markdown", "source": [
"metadata": {}, "from azureml.widgets import RunDetails\n",
"source": [ "RunDetails(local_run).show() "
"## Results" ]
] },
}, {
{ "cell_type": "markdown",
"cell_type": "markdown", "metadata": {},
"metadata": {}, "source": [
"source": [ "\n",
"#### Widget for Monitoring Runs\n", "#### Retrieve All Child Runs\n",
"\n", "You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n", ]
"\n", },
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details." {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "children = list(local_run.get_children())\n",
"outputs": [], "metricslist = {}\n",
"source": [ "for run in children:\n",
"from azureml.widgets import RunDetails\n", " properties = run.get_properties()\n",
"RunDetails(local_run).show() " " metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
] " metricslist[int(properties['iteration'])] = metrics\n",
}, "\n",
{ "rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"cell_type": "markdown", "rundata"
"metadata": {}, ]
"source": [ },
"\n", {
"#### Retrieve All Child Runs\n", "cell_type": "markdown",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log." "metadata": {},
] "source": [
}, "### Retrieve the Best Model\n",
{ "\n",
"cell_type": "code", "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "code",
"children = list(local_run.get_children())\n", "execution_count": null,
"metricslist = {}\n", "metadata": {},
"for run in children:\n", "outputs": [],
" properties = run.get_properties()\n", "source": [
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n", "best_run, fitted_model = local_run.get_output()"
" metricslist[int(properties['iteration'])] = metrics\n", ]
"\n", },
"rundata = pd.DataFrame(metricslist).sort_index(1)\n", {
"rundata" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "#### Best Model Based on Any Other Metric\n",
"cell_type": "markdown", "Show the run and the model which has the smallest `accuracy` value:"
"metadata": {}, ]
"source": [ },
"### Retrieve the Best Model\n", {
"\n", "cell_type": "code",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*." "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "# lookup_metric = \"accuracy\"\n",
"execution_count": null, "# best_run, fitted_model = local_run.get_output(metric = lookup_metric)"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"best_run, fitted_model = local_run.get_output()" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "#### Model from a Specific Iteration\n",
"cell_type": "markdown", "Show the run and the model from the third iteration:"
"metadata": {}, ]
"source": [ },
"#### Best Model Based on Any Other Metric\n", {
"Show the run and the model which has the smallest `accuracy` value:" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "# iteration = 3\n",
"metadata": {}, "# best_run, fitted_model = local_run.get_output(iteration = iteration)"
"outputs": [], ]
"source": [ },
"# lookup_metric = \"accuracy\"\n", {
"# best_run, fitted_model = local_run.get_output(metric = lookup_metric)" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## Test"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"#### Model from a Specific Iteration\n", "cell_type": "code",
"Show the run and the model from the third iteration:" "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "digits = datasets.load_digits()\n",
"execution_count": null, "X_test = digits.data[:10, :]\n",
"metadata": {}, "y_test = digits.target[:10]\n",
"outputs": [], "images = digits.images[:10]\n",
"source": [ "\n",
"# iteration = 3\n", "# Randomly select digits and test.\n",
"# best_run, fitted_model = local_run.get_output(iteration = iteration)" "for index in np.random.choice(len(y_test), 2, replace = False):\n",
] " print(index)\n",
}, " predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
{ " label = y_test[index]\n",
"cell_type": "markdown", " title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
"metadata": {}, " fig = plt.figure(1, figsize=(3,3))\n",
"source": [ " ax1 = fig.add_axes((0,0,.8,.8))\n",
"## Test" " ax1.set_title(title)\n",
] " plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
}, " plt.show()\n"
{ ]
"cell_type": "code", }
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]\n",
"\n",
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()\n"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,367 +1,348 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Explain classification model and visualize the explanation**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we use scikit-learn's [iris dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) to showcase how you can use the AutoML Classifier for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will see:\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"3. Training the model using local compute and explaining the model\n",
"4. Visualizing the model's feature importance in the widget\n",
"5. Exploring the best model's explanation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you will need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"import pandas as pd\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment.\n",
"experiment_name = 'automl-model-explanation'\n",
"# Project folder.\n",
"project_folder = './sample_projects/automl-model-explanation'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"iris = datasets.load_iris()\n",
"y = iris.target\n",
"X = iris.data\n",
"\n",
"features = iris.feature_names\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X,\n",
"                                                    y,\n",
"                                                    test_size=0.1,\n",
"                                                    random_state=100,\n",
"                                                    stratify=y)\n",
"\n",
"X_train = pd.DataFrame(X_train, columns=features)\n",
"X_test = pd.DataFrame(X_test, columns=features)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**X_valid**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y_valid**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]|\n",
"|**model_explainability**|Indicates whether to explain each trained pipeline.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
"                             debug_log = 'automl_errors.log',\n",
"                             primary_metric = 'AUC_weighted',\n",
"                             iteration_timeout_minutes = 200,\n",
"                             iterations = 10,\n",
"                             verbosity = logging.INFO,\n",
"                             X = X_train,\n",
"                             y = y_train,\n",
"                             X_valid = X_test,\n",
"                             y_valid = y_test,\n",
"                             model_explainability = True,\n",
"                             path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can call the `submit` method on the experiment object and pass the run configuration. For local runs the execution is synchronous. Depending on the data and the number of iterations, this can run for a while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Widget for Monitoring Runs\n",
"\n",
"The widget will show a \"loading\" status until the first iteration completes; then an auto-updating graph and table appear. The widget refreshes once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"NOTE: The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model for the last *fit* invocation. There are overloads on `get_output` that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Best Model's Explanation\n",
"\n",
"Retrieve the explanation from the best run. The explanation information includes:\n",
"\n",
"1.\tshap_values: The explanation information generated by the SHAP library.\n",
"2.\texpected_values: The expected value of the model applied to the set of X_train data.\n",
"3.\toverall_summary: The model-level feature importance values sorted in descending order.\n",
"4.\toverall_imp: The feature names sorted in the same order as in overall_summary.\n",
"5.\tper_class_summary: The class-level feature importance values sorted in descending order. Only available for the classification case.\n",
"6.\tper_class_imp: The feature names sorted in the same order as in per_class_summary. Only available for the classification case"
"best_run, fitted_model = local_run.get_output()\n", ]
"print(best_run)\n", },
"print(fitted_model)" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "markdown", "outputs": [],
"metadata": {}, "source": [
"source": [ "from azureml.train.automl.automlexplainer import retrieve_model_explanation\n",
"### Best Model 's explanation\n", "\n",
"\n", "shap_values, expected_values, overall_summary, overall_imp, per_class_summary, per_class_imp = \\\n",
"Retrieve the explanation from the best_run. And explanation information includes:\n", " retrieve_model_explanation(best_run)"
"\n", ]
"1.\tshap_values: The explanation information generated by shap lib\n", },
"2.\texpected_values: The expected value of the model applied to set of X_train data.\n", {
"3.\toverall_summary: The model level feature importance values sorted in descending order\n", "cell_type": "code",
"4.\toverall_imp: The feature names sorted in the same order as in overall_summary\n", "execution_count": null,
"5.\tper_class_summary: The class level feature importance values sorted in descending order. Only available for the classification case\n", "metadata": {},
"6.\tper_class_imp: The feature names sorted in the same order as in per_class_summary. Only available for the classification case" "outputs": [],
] "source": [
}, "print(overall_summary)\n",
{ "print(overall_imp)"
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "code",
"source": [ "execution_count": null,
"from azureml.train.automl.automlexplainer import retrieve_model_explanation\n", "metadata": {},
"\n", "outputs": [],
"shap_values, expected_values, overall_summary, overall_imp, per_class_summary, per_class_imp = \\\n", "source": [
" retrieve_model_explanation(best_run)" "print(per_class_summary)\n",
] "print(per_class_imp)"
}, ]
{ },
"cell_type": "code", {
"execution_count": null, "cell_type": "markdown",
"metadata": {}, "metadata": {},
"outputs": [], "source": [
"source": [ "Beside retrieve the existed model explanation information, explain the model with different train/test data"
"print(overall_summary)\n", ]
"print(overall_imp)" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "from azureml.train.automl.automlexplainer import explain_model\n",
"source": [ "\n",
"print(per_class_summary)\n", "shap_values, expected_values, overall_summary, overall_imp, per_class_summary, per_class_imp = \\\n",
"print(per_class_imp)" " explain_model(fitted_model, X_train, X_test)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "code",
"metadata": {}, "execution_count": null,
"source": [ "metadata": {},
"Beside retrieve the existed model explanation information, explain the model with different train/test data" "outputs": [],
] "source": [
}, "print(overall_summary)\n",
{ "print(overall_imp)"
"cell_type": "code", ]
"execution_count": null, }
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl.automlexplainer import explain_model\n",
"\n",
"shap_values, expected_values, overall_summary, overall_imp, per_class_summary, per_class_imp = \\\n",
" explain_model(fitted_model, X_train, X_test)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(overall_summary)\n",
"print(overall_imp)"
]
}
],
"metadata": {
"authors": [
{
"name": "xif"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "xif"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
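The explanation cells in the notebook above pair importance values (`overall_summary`) with feature names (`overall_imp`) in matching sorted order. As a rough sketch of that pairing — using made-up importance scores and plain NumPy, not the azureml explainer API:

```python
import numpy as np

# Hypothetical per-feature importance scores (stand-ins for what the
# explainer would return); feature names from the diabetes dataset.
feature_names = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
importances = np.array([0.04, 0.01, 0.30, 0.17, 0.02, 0.03, 0.05, 0.06, 0.28, 0.04])

order = np.argsort(importances)[::-1]            # indices, largest score first
overall_summary = importances[order].tolist()    # importance values, descending
overall_imp = [feature_names[i] for i in order]  # names in the matching order

print(overall_imp[:3])  # → ['bmi', 's5', 'bp']
```

The invariant to remember is only the shared ordering: `overall_imp[i]` names the feature whose importance is `overall_summary[i]`.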


@@ -1,424 +1,400 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Microsoft Corporation. All rights reserved.\n",
    "\n",
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Automated Machine Learning\n",
    "_**Regression with Local Compute**_\n",
    "\n",
    "## Contents\n",
    "1. [Introduction](#Introduction)\n",
    "1. [Setup](#Setup)\n",
    "1. [Data](#Data)\n",
    "1. [Train](#Train)\n",
    "1. [Results](#Results)\n",
    "1. [Test](#Test)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "In this example we use scikit-learn's [diabetes dataset](http://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) to showcase how you can use AutoML for a simple regression problem.\n",
    "\n",
    "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
    "\n",
    "In this notebook you will learn how to:\n",
    "1. Create an `Experiment` in an existing `Workspace`.\n",
    "2. Configure AutoML using `AutoMLConfig`.\n",
    "3. Train the model using local compute.\n",
    "4. Explore the results.\n",
    "5. Test the best fitted model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "\n",
    "from matplotlib import pyplot as plt\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "import azureml.core\n",
    "from azureml.core.experiment import Experiment\n",
    "from azureml.core.workspace import Workspace\n",
    "from azureml.train.automl import AutoMLConfig"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ws = Workspace.from_config()\n",
    "\n",
    "# Choose a name for the experiment and specify the project folder.\n",
    "experiment_name = 'automl-local-regression'\n",
    "project_folder = './sample_projects/automl-local-regression'\n",
    "\n",
    "experiment = Experiment(ws, experiment_name)\n",
    "\n",
    "output = {}\n",
    "output['SDK version'] = azureml.core.VERSION\n",
    "output['Subscription ID'] = ws.subscription_id\n",
    "output['Workspace Name'] = ws.name\n",
    "output['Resource Group'] = ws.resource_group\n",
    "output['Location'] = ws.location\n",
    "output['Project Directory'] = project_folder\n",
    "output['Experiment Name'] = experiment.name\n",
    "pd.set_option('display.max_colwidth', -1)\n",
    "outputDf = pd.DataFrame(data = output, index = [''])\n",
    "outputDf.T"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data\n",
    "This uses scikit-learn's [load_diabetes](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html) method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the diabetes dataset, a well-known built-in small dataset that comes with scikit-learn.\n",
    "from sklearn.datasets import load_diabetes\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "X, y = load_diabetes(return_X_y = True)\n",
    "\n",
    "columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']\n",
    "\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train\n",
    "\n",
    "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
    "\n",
    "|Property|Description|\n",
    "|-|-|\n",
    "|**task**|classification or regression|\n",
    "|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|\n",
    "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
    "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
    "|**n_cross_validations**|Number of cross validation splits.|\n",
    "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
    "|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
    "|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "automl_config = AutoMLConfig(task = 'regression',\n",
    "                             iteration_timeout_minutes = 10,\n",
    "                             iterations = 10,\n",
    "                             primary_metric = 'spearman_correlation',\n",
    "                             n_cross_validations = 5,\n",
    "                             debug_log = 'automl.log',\n",
    "                             verbosity = logging.INFO,\n",
    "                             X = X_train,\n",
    "                             y = y_train,\n",
    "                             path = project_folder)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
    "In this example, we specify `show_output = True` to print currently running iterations to the console."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "local_run = experiment.submit(automl_config, show_output = True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "local_run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Widget for Monitoring Runs\n",
    "\n",
    "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
    "\n",
    "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.widgets import RunDetails\n",
    "RunDetails(local_run).show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Retrieve All Child Runs\n",
    "You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "children = list(local_run.get_children())\n",
    "metricslist = {}\n",
    "for run in children:\n",
    "    properties = run.get_properties()\n",
    "    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
    "    metricslist[int(properties['iteration'])] = metrics\n",
    "\n",
    "rundata = pd.DataFrame(metricslist).sort_index(1)\n",
    "rundata"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Retrieve the Best Model\n",
    "\n",
    "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "best_run, fitted_model = local_run.get_output()\n",
    "print(best_run)\n",
    "print(fitted_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Best Model Based on Any Other Metric\n",
    "Show the run and the model that has the smallest `root_mean_squared_error` value (which turned out to be the same as the one with the largest `spearman_correlation` value):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lookup_metric = \"root_mean_squared_error\"\n",
    "best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
    "print(best_run)\n",
    "print(fitted_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Model from a Specific Iteration\n",
    "Show the run and the model from the third iteration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "iteration = 3\n",
    "third_run, third_model = local_run.get_output(iteration = iteration)\n",
    "print(third_run)\n",
    "print(third_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Test"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Predict on training and test set, and calculate residual values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_pred_train = fitted_model.predict(X_train)\n",
    "y_residual_train = y_train - y_pred_train\n",
    "\n",
    "y_pred_test = fitted_model.predict(X_test)\n",
    "y_residual_test = y_test - y_pred_test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%matplotlib inline\n",
    "from sklearn.metrics import mean_squared_error, r2_score\n",
    "\n",
    "# Set up a multi-plot chart.\n",
    "f, (a0, a1) = plt.subplots(1, 2, gridspec_kw = {'width_ratios':[1, 1], 'wspace':0, 'hspace': 0})\n",
    "f.suptitle('Regression Residual Values', fontsize = 18)\n",
    "f.set_figheight(6)\n",
    "f.set_figwidth(16)\n",
    "\n",
    "# Plot residual values of training set.\n",
    "a0.axis([0, 360, -200, 200])\n",
    "a0.plot(y_residual_train, 'bo', alpha = 0.5)\n",
    "a0.plot([-10,360],[0,0], 'r-', lw = 3)\n",
    "a0.text(16,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize = 12)\n",
    "a0.text(16,140,'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)), fontsize = 12)\n",
    "a0.set_xlabel('Training samples', fontsize = 12)\n",
    "a0.set_ylabel('Residual Values', fontsize = 12)\n",
    "\n",
    "# Plot a histogram.\n",
    "a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step')\n",
    "a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10)\n",
    "\n",
    "# Plot residual values of test set.\n",
    "a1.axis([0, 90, -200, 200])\n",
    "a1.plot(y_residual_test, 'bo', alpha = 0.5)\n",
    "a1.plot([-10,360],[0,0], 'r-', lw = 3)\n",
    "a1.text(5,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize = 12)\n",
    "a1.text(5,140,'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)), fontsize = 12)\n",
    "a1.set_xlabel('Test samples', fontsize = 12)\n",
    "a1.set_yticklabels([])\n",
    "\n",
    "# Plot a histogram.\n",
    "a1.hist(y_residual_test, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step')\n",
    "a1.hist(y_residual_test, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10)\n",
    "\n",
    "plt.show()"
   ]
  }
 ],
 "metadata": {
  "authors": [
   {
    "name": "savitam"
   }
  ],
  "kernelspec": {
   "display_name": "Python 3.6",
   "language": "python",
   "name": "python36"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
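The Test section of the notebook above computes residuals, RMSE, and the R2 score from the fitted model's predictions. A standalone sketch of the same arithmetic, with toy numbers standing in for `y_test` and the model's predictions:

```python
import numpy as np

# Toy stand-ins for y_test and the fitted model's predictions.
y_test = np.array([151.0, 75.0, 141.0, 206.0, 135.0])
y_pred = np.array([160.0, 80.0, 130.0, 200.0, 140.0])

# Residuals: actual minus predicted, as in the notebook's Test section.
residuals = y_test - y_pred

# RMSE: square root of the mean squared residual.
rmse = np.sqrt(np.mean(residuals ** 2))

# R2: 1 minus the ratio of residual sum of squares to total sum of squares.
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print('RMSE = {0:.2f}, R2 = {1:.2f}'.format(rmse, r2))  # → RMSE = 7.59, R2 = 0.97
```

These are the same quantities `sklearn.metrics.mean_squared_error` (under a square root) and `sklearn.metrics.r2_score` compute in the notebook's plotting cell.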


@@ -1,260 +1,240 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Microsoft Corporation. All rights reserved.\n",
    "\n",
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Automated Machine Learning\n",
    "_**Sample Weight**_\n",
    "\n",
    "## Contents\n",
    "1. [Introduction](#Introduction)\n",
    "1. [Setup](#Setup)\n",
    "1. [Train](#Train)\n",
    "1. [Test](#Test)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "In this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use sample weight with AutoML. Sample weight is used where some sample values are more important than others.\n",
    "\n",
    "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
    "\n",
    "In this notebook you will learn how to configure AutoML to use `sample_weight` and you will see the difference sample weight makes to the test results."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "\n",
    "from matplotlib import pyplot as plt\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from sklearn import datasets\n",
    "\n",
    "import azureml.core\n",
    "from azureml.core.experiment import Experiment\n",
    "from azureml.core.workspace import Workspace\n",
    "from azureml.train.automl import AutoMLConfig"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ws = Workspace.from_config()\n",
    "\n",
    "# Choose names for the regular and the sample weight experiments.\n",
    "experiment_name = 'non_sample_weight_experiment'\n",
    "sample_weight_experiment_name = 'sample_weight_experiment'\n",
"\n", "\n",
"# Choose names for the regular and the sample weight experiments.\n", "project_folder = './sample_projects/sample_weight'\n",
"experiment_name = 'non_sample_weight_experiment'\n", "\n",
"sample_weight_experiment_name = 'sample_weight_experiment'\n", "experiment = Experiment(ws, experiment_name)\n",
"\n", "sample_weight_experiment=Experiment(ws, sample_weight_experiment_name)\n",
"project_folder = './sample_projects/automl-local-classification'\n", "\n",
"\n", "output = {}\n",
"experiment = Experiment(ws, experiment_name)\n", "output['SDK version'] = azureml.core.VERSION\n",
"sample_weight_experiment=Experiment(ws, sample_weight_experiment_name)\n", "output['Subscription ID'] = ws.subscription_id\n",
"\n", "output['Workspace Name'] = ws.name\n",
"output = {}\n", "output['Resource Group'] = ws.resource_group\n",
"output['SDK version'] = azureml.core.VERSION\n", "output['Location'] = ws.location\n",
"output['Subscription ID'] = ws.subscription_id\n", "output['Project Directory'] = project_folder\n",
"output['Workspace Name'] = ws.name\n", "output['Experiment Name'] = experiment.name\n",
"output['Resource Group'] = ws.resource_group\n", "pd.set_option('display.max_colwidth', -1)\n",
"output['Location'] = ws.location\n", "outputDf = pd.DataFrame(data = output, index = [''])\n",
"output['Project Directory'] = project_folder\n", "outputDf.T"
"output['Experiment Name'] = experiment.name\n", ]
"pd.set_option('display.max_colwidth', -1)\n", },
"pd.DataFrame(data = output, index = ['']).T" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "## Train\n",
"metadata": {}, "\n",
"source": [ "Instantiate two `AutoMLConfig` objects. One will be used with `sample_weight` and one without."
"Opt-in diagnostics for better experience, quality, and security of future releases." ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "digits = datasets.load_digits()\n",
"from azureml.telemetry import set_diagnostics_collection\n", "X_train = digits.data[100:,:]\n",
"set_diagnostics_collection(send_diagnostics = True)" "y_train = digits.target[100:]\n",
] "\n",
}, "# The example makes the sample weight 0.9 for the digit 4 and 0.01 for all other digits.\n",
{ "# This makes the model more likely to classify as 4 if the image is not clear.\n",
"cell_type": "markdown", "sample_weight = np.array([(0.9 if x == 4 else 0.01) for x in y_train])\n",
"metadata": {}, "\n",
"source": [ "automl_classifier = AutoMLConfig(task = 'classification',\n",
"## Train\n", " debug_log = 'automl_errors.log',\n",
"\n", " primary_metric = 'AUC_weighted',\n",
"Instantiate two `AutoMLConfig` objects. One will be used with `sample_weight` and one without." " iteration_timeout_minutes = 60,\n",
] " iterations = 10,\n",
}, " n_cross_validations = 2,\n",
{ " verbosity = logging.INFO,\n",
"cell_type": "code", " X = X_train, \n",
"execution_count": null, " y = y_train,\n",
"metadata": {}, " path = project_folder)\n",
"outputs": [], "\n",
"source": [ "automl_sample_weight = AutoMLConfig(task = 'classification',\n",
"digits = datasets.load_digits()\n", " debug_log = 'automl_errors.log',\n",
"X_train = digits.data[100:,:]\n", " primary_metric = 'AUC_weighted',\n",
"y_train = digits.target[100:]\n", " iteration_timeout_minutes = 60,\n",
"\n", " iterations = 10,\n",
"# The example makes the sample weight 0.9 for the digit 4 and 0.1 for all other digits.\n", " n_cross_validations = 2,\n",
"# This makes the model more likely to classify as 4 if the image it not clear.\n", " verbosity = logging.INFO,\n",
"sample_weight = np.array([(0.9 if x == 4 else 0.01) for x in y_train])\n", " X = X_train, \n",
"\n", " y = y_train,\n",
"automl_classifier = AutoMLConfig(task = 'classification',\n", " sample_weight = sample_weight,\n",
" debug_log = 'automl_errors.log',\n", " path = project_folder)"
" primary_metric = 'AUC_weighted',\n", ]
" iteration_timeout_minutes = 60,\n", },
" iterations = 10,\n", {
" n_cross_validations = 2,\n", "cell_type": "markdown",
" verbosity = logging.INFO,\n", "metadata": {},
" X = X_train, \n", "source": [
" y = y_train,\n", "Call the `submit` method on the experiment objects and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
" path = project_folder)\n", "In this example, we specify `show_output = True` to print currently running iterations to the console."
"\n", ]
"automl_sample_weight = AutoMLConfig(task = 'classification',\n", },
" debug_log = 'automl_errors.log',\n", {
" primary_metric = 'AUC_weighted',\n", "cell_type": "code",
" iteration_timeout_minutes = 60,\n", "execution_count": null,
" iterations = 10,\n", "metadata": {},
" n_cross_validations = 2,\n", "outputs": [],
" verbosity = logging.INFO,\n", "source": [
" X = X_train, \n", "local_run = experiment.submit(automl_classifier, show_output = True)\n",
" y = y_train,\n", "sample_weight_run = sample_weight_experiment.submit(automl_sample_weight, show_output = True)\n",
" sample_weight = sample_weight,\n", "\n",
" path = project_folder)" "best_run, fitted_model = local_run.get_output()\n",
] "best_run_sample_weight, fitted_model_sample_weight = sample_weight_run.get_output()"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "markdown",
"source": [ "metadata": {},
"Call the `submit` method on the experiment objects and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n", "source": [
"In this example, we specify `show_output = True` to print currently running iterations to the console." "## Test\n",
] "\n",
}, "#### Load Test Data"
{ ]
"cell_type": "code", },
"execution_count": null, {
"metadata": {}, "cell_type": "code",
"outputs": [], "execution_count": null,
"source": [ "metadata": {},
"local_run = experiment.submit(automl_classifier, show_output = True)\n", "outputs": [],
"sample_weight_run = sample_weight_experiment.submit(automl_sample_weight, show_output = True)\n", "source": [
"\n", "digits = datasets.load_digits()\n",
"best_run, fitted_model = local_run.get_output()\n", "X_test = digits.data[:100, :]\n",
"best_run_sample_weight, fitted_model_sample_weight = sample_weight_run.get_output()" "y_test = digits.target[:100]\n",
] "images = digits.images[:100]"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "markdown",
"source": [ "metadata": {},
"## Test\n", "source": [
"\n", "#### Compare the Models\n",
"#### Load Test Data" "The prediction from the sample weight model is more likely to correctly predict 4's. However, it is also more likely to predict 4 for some images that are not labelled as 4."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"digits = datasets.load_digits()\n", "# Randomly select digits and test.\n",
"X_test = digits.data[:100, :]\n", "for index in range(0,len(y_test)):\n",
"y_test = digits.target[:100]\n", " predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
"images = digits.images[:100]" " predicted_sample_weight = fitted_model_sample_weight.predict(X_test[index:index + 1])[0]\n",
] " label = y_test[index]\n",
}, " if predicted == 4 or predicted_sample_weight == 4 or label == 4:\n",
{ " title = \"Label value = %d Predicted value = %d Prediced with sample weight = %d\" % (label, predicted, predicted_sample_weight)\n",
"cell_type": "markdown", " fig = plt.figure(1, figsize=(3,3))\n",
"metadata": {}, " ax1 = fig.add_axes((0,0,.8,.8))\n",
"source": [ " ax1.set_title(title)\n",
"#### Compare the Models\n", " plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
"The prediction from the sample weight model is more likely to correctly predict 4's. However, it is also more likely to predict 4 for some images that are not labelled as 4." " plt.show()"
] ]
}, }
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in range(0,len(y_test)):\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" predicted_sample_weight = fitted_model_sample_weight.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" if predicted == 4 or predicted_sample_weight == 4 or label == 4:\n",
" title = \"Label value = %d Predicted value = %d Prediced with sample weight = %d\" % (label, predicted, predicted_sample_weight)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
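For readers skimming the sample-weight diff above, the effect being demonstrated can be sketched without Azure ML at all. The following is a minimal, hypothetical stand-in — plain scikit-learn `LogisticRegression` in place of the AutoML fitted pipeline, with the same 0.9 / 0.01 weighting the notebook configures:

```python
# Hedged sketch: plain scikit-learn instead of AutoML; LogisticRegression is a
# hypothetical stand-in for the fitted pipeline the notebook retrieves.
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression

digits = datasets.load_digits()
X_train, y_train = digits.data[100:], digits.target[100:]
X_test = digits.data[:100]

# Same weighting as the notebook: 0.9 for the digit 4, 0.01 for the rest.
sample_weight = np.array([0.9 if y == 4 else 0.01 for y in y_train])

plain = LogisticRegression(max_iter=5000).fit(X_train, y_train)
weighted = LogisticRegression(max_iter=5000).fit(
    X_train, y_train, sample_weight=sample_weight)

# The weighted model predicts 4 more readily, including on unclear images.
n_plain = int(np.sum(plain.predict(X_test) == 4))
n_weighted = int(np.sum(weighted.predict(X_test) == 4))
print(n_plain, n_weighted)
```

With such an extreme weight ratio the class-4 samples dominate the training loss, so the weighted model calls 4 far more often — the same trade-off the notebook's comparison loop visualizes.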


@@ -1,403 +1,380 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n", "Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Automated Machine Learning\n", "# Automated Machine Learning\n",
"_**Train Test Split and Handling Sparse Data**_\n", "_**Train Test Split and Handling Sparse Data**_\n",
"\n", "\n",
"## Contents\n", "## Contents\n",
"1. [Introduction](#Introduction)\n", "1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n", "1. [Setup](#Setup)\n",
"1. [Data](#Data)\n", "1. [Data](#Data)\n",
"1. [Train](#Train)\n", "1. [Train](#Train)\n",
"1. [Results](#Results)\n", "1. [Results](#Results)\n",
"1. [Test](#Test)\n" "1. [Test](#Test)\n"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Introduction\n", "## Introduction\n",
"In this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML for handling sparse data and how to specify custom cross validations splits.\n", "In this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML for handling sparse data and how to specify custom cross validations splits.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n", "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n", "\n",
"In this notebook you will learn how to:\n", "In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n", "1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n", "2. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model.\n", "4. Train the model.\n",
"5. Explore the results.\n", "5. Explore the results.\n",
"6. Test the best fitted model.\n", "6. Test the best fitted model.\n",
"\n", "\n",
"In addition this notebook showcases the following features\n", "In addition this notebook showcases the following features\n",
"- Explicit train test splits \n", "- Explicit train test splits \n",
"- Handling **sparse data** in the input" "- Handling **sparse data** in the input"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Setup\n", "## Setup\n",
"\n", "\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import logging\n", "import logging\n",
"import os\n", "\n",
"import random\n", "import pandas as pd\n",
"\n", "\n",
"from matplotlib import pyplot as plt\n", "import azureml.core\n",
"from matplotlib.pyplot import imshow\n", "from azureml.core.experiment import Experiment\n",
"import numpy as np\n", "from azureml.core.workspace import Workspace\n",
"import pandas as pd\n", "from azureml.train.automl import AutoMLConfig"
"from sklearn import datasets\n", ]
"\n", },
"import azureml.core\n", {
"from azureml.core.experiment import Experiment\n", "cell_type": "code",
"from azureml.core.workspace import Workspace\n", "execution_count": null,
"from azureml.train.automl import AutoMLConfig\n", "metadata": {},
"from azureml.train.automl.run import AutoMLRun" "outputs": [],
] "source": [
}, "ws = Workspace.from_config()\n",
{ "\n",
"cell_type": "code", "# choose a name for the experiment\n",
"execution_count": null, "experiment_name = 'sparse-data-train-test-split'\n",
"metadata": {}, "# project folder\n",
"outputs": [], "project_folder = './sample_projects/sparse-data-train-test-split'\n",
"source": [ "\n",
"ws = Workspace.from_config()\n", "experiment = Experiment(ws, experiment_name)\n",
"\n", "\n",
"# choose a name for the experiment\n", "output = {}\n",
"experiment_name = 'automl-local-missing-data'\n", "output['SDK version'] = azureml.core.VERSION\n",
"# project folder\n", "output['Subscription ID'] = ws.subscription_id\n",
"project_folder = './sample_projects/automl-local-missing-data'\n", "output['Workspace'] = ws.name\n",
"\n", "output['Resource Group'] = ws.resource_group\n",
"experiment = Experiment(ws, experiment_name)\n", "output['Location'] = ws.location\n",
"\n", "output['Project Directory'] = project_folder\n",
"output = {}\n", "output['Experiment Name'] = experiment.name\n",
"output['SDK version'] = azureml.core.VERSION\n", "pd.set_option('display.max_colwidth', -1)\n",
"output['Subscription ID'] = ws.subscription_id\n", "outputDf = pd.DataFrame(data = output, index = [''])\n",
"output['Workspace'] = ws.name\n", "outputDf.T"
"output['Resource Group'] = ws.resource_group\n", ]
"output['Location'] = ws.location\n", },
"output['Project Directory'] = project_folder\n", {
"output['Experiment Name'] = experiment.name\n", "cell_type": "markdown",
"pd.set_option('display.max_colwidth', -1)\n", "metadata": {},
"pd.DataFrame(data=output, index=['']).T" "source": [
] "## Data"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "code",
"source": [ "execution_count": null,
"Opt-in diagnostics for better experience, quality, and security of future releases." "metadata": {},
] "outputs": [],
}, "source": [
{ "from sklearn.datasets import fetch_20newsgroups\n",
"cell_type": "code", "from sklearn.feature_extraction.text import HashingVectorizer\n",
"execution_count": null, "from sklearn.model_selection import train_test_split\n",
"metadata": {}, "\n",
"outputs": [], "remove = ('headers', 'footers', 'quotes')\n",
"source": [ "categories = [\n",
"from azureml.telemetry import set_diagnostics_collection\n", " 'alt.atheism',\n",
"set_diagnostics_collection(send_diagnostics = True)" " 'talk.religion.misc',\n",
] " 'comp.graphics',\n",
}, " 'sci.space',\n",
{ "]\n",
"cell_type": "markdown", "data_train = fetch_20newsgroups(subset = 'train', categories = categories,\n",
"metadata": {}, " shuffle = True, random_state = 42,\n",
"source": [ " remove = remove)\n",
"## Data" "\n",
] "X_train, X_valid, y_train, y_valid = train_test_split(data_train.data, data_train.target, test_size = 0.33, random_state = 42)\n",
}, "\n",
{ "\n",
"cell_type": "code", "vectorizer = HashingVectorizer(stop_words = 'english', alternate_sign = False,\n",
"execution_count": null, " n_features = 2**16)\n",
"metadata": {}, "X_train = vectorizer.transform(X_train)\n",
"outputs": [], "X_valid = vectorizer.transform(X_valid)\n",
"source": [ "\n",
"from sklearn.datasets import fetch_20newsgroups\n", "summary_df = pd.DataFrame(index = ['No of Samples', 'No of Features'])\n",
"from sklearn.feature_extraction.text import HashingVectorizer\n", "summary_df['Train Set'] = [X_train.shape[0], X_train.shape[1]]\n",
"from sklearn.model_selection import train_test_split\n", "summary_df['Validation Set'] = [X_valid.shape[0], X_valid.shape[1]]\n",
"\n", "summary_df"
"remove = ('headers', 'footers', 'quotes')\n", ]
"categories = [\n", },
" 'alt.atheism',\n", {
" 'talk.religion.misc',\n", "cell_type": "markdown",
" 'comp.graphics',\n", "metadata": {},
" 'sci.space',\n", "source": [
"]\n", "## Train\n",
"data_train = fetch_20newsgroups(subset = 'train', categories = categories,\n", "\n",
" shuffle = True, random_state = 42,\n", "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
" remove = remove)\n", "\n",
"\n", "|Property|Description|\n",
"X_train, X_valid, y_train, y_valid = train_test_split(data_train.data, data_train.target, test_size = 0.33, random_state = 42)\n", "|-|-|\n",
"\n", "|**task**|classification or regression|\n",
"\n", "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"vectorizer = HashingVectorizer(stop_words = 'english', alternate_sign = False,\n", "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
" n_features = 2**16)\n", "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"X_train = vectorizer.transform(X_train)\n", "|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.<br>**Note:** If input data is sparse, you cannot use *True*.|\n",
"X_valid = vectorizer.transform(X_valid)\n", "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"\n", "|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"summary_df = pd.DataFrame(index = ['No of Samples', 'No of Features'])\n", "|**X_valid**|(sparse) array-like, shape = [n_samples, n_features] for the custom validation set.|\n",
"summary_df['Train Set'] = [X_train.shape[0], X_train.shape[1]]\n", "|**y_valid**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification for the custom validation set.|\n",
"summary_df['Validation Set'] = [X_valid.shape[0], X_valid.shape[1]]\n", "|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
"summary_df" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "markdown", "execution_count": null,
"metadata": {}, "metadata": {},
"source": [ "outputs": [],
"## Train\n", "source": [
"\n", "automl_config = AutoMLConfig(task = 'classification',\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n", " debug_log = 'automl_errors.log',\n",
"\n", " primary_metric = 'AUC_weighted',\n",
"|Property|Description|\n", " iteration_timeout_minutes = 60,\n",
"|-|-|\n", " iterations = 5,\n",
"|**task**|classification or regression|\n", " preprocess = False,\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n", " verbosity = logging.INFO,\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n", " X = X_train, \n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n", " y = y_train,\n",
"|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.<br>**Note:** If input data is sparse, you cannot use *True*.|\n", " X_valid = X_valid, \n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n", " y_valid = y_valid, \n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n", " path = project_folder)"
"|**X_valid**|(sparse) array-like, shape = [n_samples, n_features] for the custom validation set.|\n", ]
"|**y_valid**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification for the custom validation set.|\n", },
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "code", "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"execution_count": null, "In this example, we specify `show_output = True` to print currently running iterations to the console."
"metadata": {}, ]
"outputs": [], },
"source": [ {
"automl_config = AutoMLConfig(task = 'classification',\n", "cell_type": "code",
" debug_log = 'automl_errors.log',\n", "execution_count": null,
" primary_metric = 'AUC_weighted',\n", "metadata": {},
" iteration_timeout_minutes = 60,\n", "outputs": [],
" iterations = 5,\n", "source": [
" preprocess = False,\n", "local_run = experiment.submit(automl_config, show_output=True)"
" verbosity = logging.INFO,\n", ]
" X = X_train, \n", },
" y = y_train,\n", {
" X_valid = X_valid, \n", "cell_type": "code",
" y_valid = y_valid, \n", "execution_count": null,
" path = project_folder)" "metadata": {},
] "outputs": [],
}, "source": [
{ "local_run"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n", "cell_type": "markdown",
"In this example, we specify `show_output = True` to print currently running iterations to the console." "metadata": {},
] "source": [
}, "## Results"
{ ]
"cell_type": "code", },
"execution_count": null, {
"metadata": {}, "cell_type": "markdown",
"outputs": [], "metadata": {},
"source": [ "source": [
"local_run = experiment.submit(automl_config, show_output=True)" "#### Widget for Monitoring Runs\n",
] "\n",
}, "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
{ "\n",
"cell_type": "code", "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "code",
"local_run" "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "markdown", "from azureml.widgets import RunDetails\n",
"metadata": {}, "RunDetails(local_run).show() "
"source": [ ]
"## Results" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "\n",
"source": [ "#### Retrieve All Child Runs\n",
"#### Widget for Monitoring Runs\n", "You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
"\n", ]
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n", },
"\n", {
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details." "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "children = list(local_run.get_children())\n",
"metadata": {}, "metricslist = {}\n",
"outputs": [], "for run in children:\n",
"source": [ " properties = run.get_properties()\n",
"from azureml.widgets import RunDetails\n", " metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
"RunDetails(local_run).show() " " metricslist[int(properties['iteration'])] = metrics\n",
] " \n",
}, "rundata = pd.DataFrame(metricslist).sort_index(1)\n",
{ "rundata"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"\n", "cell_type": "markdown",
"#### Retrieve All Child Runs\n", "metadata": {},
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log." "source": [
] "### Retrieve the Best Model\n",
}, "\n",
{ "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "code",
"source": [ "execution_count": null,
"children = list(local_run.get_children())\n", "metadata": {},
"metricslist = {}\n", "outputs": [],
"for run in children:\n", "source": [
" properties = run.get_properties()\n", "best_run, fitted_model = local_run.get_output()"
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n", ]
" metricslist[int(properties['iteration'])] = metrics\n", },
" \n", {
"rundata = pd.DataFrame(metricslist).sort_index(1)\n", "cell_type": "markdown",
"rundata" "metadata": {},
] "source": [
}, "#### Best Model Based on Any Other Metric\n",
{ "Show the run and the model which has the smallest `accuracy` value:"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"### Retrieve the Best Model\n", "cell_type": "code",
"\n", "execution_count": null,
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*." "metadata": {},
] "outputs": [],
}, "source": [
{ "# lookup_metric = \"accuracy\"\n",
"cell_type": "code", "# best_run, fitted_model = local_run.get_output(metric = lookup_metric)"
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "markdown",
"best_run, fitted_model = local_run.get_output()" "metadata": {},
] "source": [
}, "#### Model from a Specific Iteration\n",
{ "Show the run and the model from the third iteration:"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"#### Best Model Based on Any Other Metric\n", "cell_type": "code",
"Show the run and the model which has the smallest `accuracy` value:" "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "# iteration = 3\n",
"execution_count": null, "# best_run, fitted_model = local_run.get_output(iteration = iteration)"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"# lookup_metric = \"accuracy\"\n", "cell_type": "markdown",
"# best_run, fitted_model = local_run.get_output(metric = lookup_metric)" "metadata": {},
] "source": [
}, "## Test"
{ ]
"cell_type": "markdown", },
"metadata": {}, {
"source": [ "cell_type": "code",
"#### Model from a Specific Iteration\n", "execution_count": null,
"Show the run and the model from the third iteration:" "metadata": {},
] "outputs": [],
}, "source": [
{ "# Load test data.\n",
"cell_type": "code", "from pandas_ml import ConfusionMatrix\n",
"execution_count": null, "\n",
"metadata": {}, "data_test = fetch_20newsgroups(subset = 'test', categories = categories,\n",
"outputs": [], " shuffle = True, random_state = 42,\n",
"source": [ " remove = remove)\n",
"# iteration = 3\n", "\n",
"# best_run, fitted_model = local_run.get_output(iteration = iteration)" "X_test = vectorizer.transform(data_test.data)\n",
] "y_test = data_test.target\n",
}, "\n",
{ "# Test our best pipeline.\n",
"cell_type": "markdown", "\n",
"metadata": {}, "y_pred = fitted_model.predict(X_test)\n",
"source": [ "y_pred_strings = [data_test.target_names[i] for i in y_pred]\n",
"## Test" "y_test_strings = [data_test.target_names[i] for i in y_test]\n",
] "\n",
}, "cm = ConfusionMatrix(y_test_strings, y_pred_strings)\n",
{ "print(cm)\n",
"cell_type": "code", "cm.plot()"
"execution_count": null, ]
"metadata": {}, }
"outputs": [],
"source": [
"# Load test data.\n",
"from pandas_ml import ConfusionMatrix\n",
"\n",
"data_test = fetch_20newsgroups(subset = 'test', categories = categories,\n",
" shuffle = True, random_state = 42,\n",
" remove = remove)\n",
"\n",
"X_test = vectorizer.transform(data_test.data)\n",
"y_test = data_test.target\n",
"\n",
"# Test our best pipeline.\n",
"\n",
"y_pred = fitted_model.predict(X_test)\n",
"y_pred_strings = [data_test.target_names[i] for i in y_pred]\n",
"y_test_strings = [data_test.target_names[i] for i in y_test]\n",
"\n",
"cm = ConfusionMatrix(y_test_strings, y_pred_strings)\n",
"print(cm)\n",
"cm.plot()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
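The `## Test` cell in the notebook above depends on `pandas_ml`, which may not install cleanly alongside newer pandas releases. As a dependency-free sketch (the label strings below are stand-ins, not real 20newsgroups output), the same confusion counts can be built with the standard library alone:

```python
from collections import Counter

def confusion_counts(y_true, y_pred):
    """Map each (true_label, predicted_label) pair to its count."""
    return Counter(zip(y_true, y_pred))

# Stand-in labels mirroring the notebook's y_test_strings / y_pred_strings.
y_test_strings = ["sci.med", "sci.space", "sci.med", "sci.space"]
y_pred_strings = ["sci.med", "sci.med", "sci.med", "sci.space"]

cm = confusion_counts(y_test_strings, y_pred_strings)
for (true, pred), n in sorted(cm.items()):
    print(f"true={true:<10} pred={pred:<10} count={n}")
```

This only tabulates the counts; the notebook's `cm.plot()` rendering has no standard-library equivalent.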

View File

@@ -0,0 +1,201 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification with Local Compute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we will explore AutoML's subsampling feature. This is useful for training on large datasets to speed up the convergence.\n",
"\n",
"The setup is quiet similar to a normal classification, with the exception of the `enable_subsampling` option. Keep in mind that even with the `enable_subsampling` flag set, subsampling will only be run for large datasets (>= 50k rows) and large (>= 85) or no iteration restrictions.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-subsampling'\n",
"project_folder = './sample_projects/automl-subsampling'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"\n",
"We will create a simple dataset using the numpy sin function just for this example. We need just over 50k rows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"base = np.arange(60000)\n",
"cos = np.cos(base)\n",
"y = np.round(np.sin(base)).astype('int')\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_train = np.hstack((base.reshape(-1, 1), cos.reshape(-1, 1)))\n",
"y_train = y"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**enable_subsampling**|This enables subsampling as an option. However it does not guarantee subsampling will be used. It also depends on how large the dataset is and how many iterations it's expected to run at a minimum.|\n",
"|**iterations**|Number of iterations. Subsampling requires a lot of iterations at smaller percent so in order for subsampling to be used we need to set iterations to be a high number.|\n",
"|**experiment_timeout_minutes**|The experiment timeout, it's set to 5 right now to shorten the demo but it should probably be higher if we want to finish all the iterations.|\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'accuracy',\n",
" iterations = 85,\n",
" experiment_timeout_minutes = 5,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" enable_subsampling=True,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "rogehe"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
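The subsampling notebook above builds its dataset with numpy. A minimal standard-library sketch of the same logic (mirroring the notebook's variable names, not its exact numpy types) confirms the data clears the documented subsampling threshold of 50k rows and that rounding `sin` yields three classes:

```python
import math

# Stand-in for the notebook's dataset: 60,000 rows of (index, cos(index))
# features, with round(sin(index)) as the class label.
n_rows = 60000
X_train = [(i, math.cos(i)) for i in range(n_rows)]
y_train = [round(math.sin(i)) for i in range(n_rows)]

print(len(X_train))          # row count comfortably above the 50k threshold
print(sorted(set(y_train)))  # rounding sin(i) produces the labels -1, 0 and 1
```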

View File

@@ -1,70 +1,29 @@
Azure Databricks is a managed Spark offering on Azure and customers already use it for advanced analytics. It provides a collaborative Notebook based environment with CPU or GPU based compute cluster.
In this section, you will find sample notebooks on how to use Azure Machine Learning SDK with Azure Databricks. You can train a model using Spark MLlib and then deploy the model to ACI/AKS from within Azure Databricks. You can also use Automated ML capability (**public preview**) of Azure ML SDK with Azure Databricks.
- Customers who use Azure Databricks for advanced analytics can now use the same cluster to run experiments with or without automated machine learning.
- You can keep the data within the same cluster.
- You can leverage the local worker nodes with autoscale and auto termination capabilities.
- You can use multiple cores of your Azure Databricks cluster to perform simultaneous training.
- You can further tune the model generated by automated machine learning if you choose to.
- Every run (including the best run) is available as a pipeline, which you can tune further if needed.
- The model trained using Azure Databricks can be registered in Azure ML SDK workspace and then deployed to Azure managed compute (ACI or AKS) using the Azure Machine Learning SDK.
Please follow our [Azure doc](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#azure-databricks) to install the SDK in your Azure Databricks cluster before trying any of the sample notebooks.
**Single file** -
The following archive contains all the sample notebooks. You can run the notebooks after importing [DBC](Databricks_AMLSDK_1-4_6.dbc) in your Databricks workspace instead of downloading them individually.
Notebooks 1-4 have to be run sequentially and are related to an income prediction experiment based on this [dataset](https://archive.ics.uci.edu/ml/datasets/adult); they demonstrate how to prepare data, train, and operationalize a Spark ML model with the Azure ML Python SDK from within Azure Databricks.
Notebook 6 is an Automated ML sample notebook for Classification.
Learn more about [how to use Azure Databricks as a development environment](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment#azure-databricks) for Azure Machine Learning service.
**Databricks as a Compute Target from AML Pipelines**
You can use Azure Databricks as a compute target from [Azure Machine Learning Pipelines](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines). Take a look at this notebook for details: [aml-pipelines-use-databricks-as-compute-target.ipynb](aml-pipelines-use-databricks-as-compute-target.ipynb).
For more on SDK concepts, please refer to [notebooks](https://github.com/Azure/MachineLearningNotebooks).
**Please let us know your feedback.**

View File

@@ -0,0 +1,714 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Using Databricks as a Compute Target from Azure Machine Learning Pipeline\n",
"To use Databricks as a compute target from [Azure Machine Learning Pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines), a [DatabricksStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep?view=azure-ml-py) is used. This notebook demonstrates the use of DatabricksStep in Azure Machine Learning Pipeline.\n",
"\n",
"The notebook will show:\n",
"1. Running an arbitrary Databricks notebook that the customer has in Databricks workspace\n",
"2. Running an arbitrary Python script that the customer has in DBFS\n",
"3. Running an arbitrary Python script that is available on local computer (will upload to DBFS, and then run in Databricks) \n",
"4. Running a JAR job that the customer has in DBFS.\n",
"\n",
"## Before you begin:\n",
"\n",
"1. **Create an Azure Databricks workspace** in the same subscription where you have your Azure Machine Learning workspace. You will need details of this workspace later on to define DatabricksStep. [Click here](https://ms.portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.Databricks%2Fworkspaces) for more information.\n",
"2. **Create PAT (access token)**: Manually create a Databricks access token at the Azure Databricks portal. See [this](https://docs.databricks.com/api/latest/authentication.html#generate-a-token) for more information.\n",
"3. **Add demo notebook to ADB**: This notebook has a sample you can use as is. Launch Azure Databricks attached to your Azure Machine Learning workspace and add a new notebook. \n",
"4. **Create/attach a Blob storage** for use from ADB"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add demo notebook to ADB Workspace\n",
"Copy and paste the below code to create a new notebook in your ADB workspace."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# direct access\n",
"dbutils.widgets.get(\"myparam\")\n",
"p = getArgument(\"myparam\")\n",
"print (\"Param -\\'myparam':\")\n",
"print (p)\n",
"\n",
"dbutils.widgets.get(\"input\")\n",
"i = getArgument(\"input\")\n",
"print (\"Param -\\'input':\")\n",
"print (i)\n",
"\n",
"dbutils.widgets.get(\"output\")\n",
"o = getArgument(\"output\")\n",
"print (\"Param -\\'output':\")\n",
"print (o)\n",
"\n",
"n = i + \"/testdata.txt\"\n",
"df = spark.read.csv(n)\n",
"\n",
"display (df)\n",
"\n",
"data = [('value1', 'value2')]\n",
"df2 = spark.createDataFrame(data)\n",
"\n",
"z = o + \"/output.txt\"\n",
"df2.write.csv(z)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Azure Machine Learning and Pipeline SDK-specific imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import azureml.core\n",
"from azureml.core.runconfig import JarLibrary\n",
"from azureml.core.compute import ComputeTarget, DatabricksCompute\n",
"from azureml.exceptions import ComputeTargetException\n",
"from azureml.core import Workspace, Experiment\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import DatabricksStep\n",
"from azureml.core.datastore import Datastore\n",
"from azureml.data.data_reference import DataReference\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach Databricks compute target\n",
"Next, you need to add your Databricks workspace to Azure Machine Learning as a compute target and give it a name. You will use this name to refer to your Databricks workspace compute target inside Azure Machine Learning.\n",
"\n",
"- **Resource Group** - The resource group name of your Azure Machine Learning workspace\n",
"- **Databricks Workspace Name** - The workspace name of your Azure Databricks workspace\n",
"- **Databricks Access Token** - The access token you created in ADB\n",
"\n",
"**The Databricks workspace need to be present in the same subscription as your AML workspace**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Replace with your account info before running.\n",
" \n",
"db_compute_name=os.getenv(\"DATABRICKS_COMPUTE_NAME\", \"<my-databricks-compute-name>\") # Databricks compute name\n",
"db_resource_group=os.getenv(\"DATABRICKS_RESOURCE_GROUP\", \"<my-db-resource-group>\") # Databricks resource group\n",
"db_workspace_name=os.getenv(\"DATABRICKS_WORKSPACE_NAME\", \"<my-db-workspace-name>\") # Databricks workspace name\n",
"db_access_token=os.getenv(\"DATABRICKS_ACCESS_TOKEN\", \"<my-access-token>\") # Databricks access token\n",
" \n",
"try:\n",
" databricks_compute = DatabricksCompute(workspace=ws, name=db_compute_name)\n",
" print('Compute target {} already exists'.format(db_compute_name))\n",
"except ComputeTargetException:\n",
" print('Compute not found, will use below parameters to attach new one')\n",
" print('db_compute_name {}'.format(db_compute_name))\n",
" print('db_resource_group {}'.format(db_resource_group))\n",
" print('db_workspace_name {}'.format(db_workspace_name))\n",
" print('db_access_token {}'.format(db_access_token))\n",
" \n",
" config = DatabricksCompute.attach_configuration(\n",
" resource_group = db_resource_group,\n",
" workspace_name = db_workspace_name,\n",
" access_token= db_access_token)\n",
" databricks_compute=ComputeTarget.attach(ws, db_compute_name, config)\n",
" databricks_compute.wait_for_completion(True)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Connections with Inputs and Outputs\n",
"The DatabricksStep supports Azure Bloband ADLS for inputs and outputs. You also will need to define a [Secrets](https://docs.azuredatabricks.net/user-guide/secrets/index.html) scope to enable authentication to external data sources such as Blob and ADLS from Databricks.\n",
"\n",
"- Databricks documentation on [Azure Blob](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html)\n",
"- Databricks documentation on [ADLS](https://docs.databricks.com/spark/latest/data-sources/azure/azure-datalake.html)\n",
"\n",
"### Type of Data Access\n",
"Databricks allows to interact with Azure Blob and ADLS in two ways.\n",
"- **Direct Access**: Databricks allows you to interact with Azure Blob or ADLS URIs directly. The input or output URIs will be mapped to a Databricks widget param in the Databricks notebook.\n",
"- **Mounting**: You will be supplied with additional parameters and secrets that will enable you to mount your ADLS or Azure Blob input or output location in your Databricks notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Direct Access: Python sample code\n",
"If you have a data reference named \"input\" it will represent the URI of the input and you can access it directly in the Databricks python notebook like so:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"dbutils.widgets.get(\"input\")\n",
"y = getArgument(\"input\")\n",
"df = spark.read.csv(y)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Mounting: Python sample code for Azure Blob\n",
"Given an Azure Blob data reference named \"input\" the following widget params will be made available in the Databricks notebook:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# This contains the input URI\n",
"dbutils.widgets.get(\"input\")\n",
"myinput_uri = getArgument(\"input\")\n",
"\n",
"# How to get the input datastore name inside ADB notebook\n",
"# This contains the name of a Databricks secret (in the predefined \"amlscope\" secret scope) \n",
"# that contians an access key or sas for the Azure Blob input (this name is obtained by appending \n",
"# the name of the input with \"_blob_secretname\". \n",
"dbutils.widgets.get(\"input_blob_secretname\") \n",
"myinput_blob_secretname = getArgument(\"input_blob_secretname\")\n",
"\n",
"# This contains the required configuration for mounting\n",
"dbutils.widgets.get(\"input_blob_config\")\n",
"myinput_blob_config = getArgument(\"input_blob_config\")\n",
"\n",
"# Usage\n",
"dbutils.fs.mount(\n",
" source = myinput_uri,\n",
" mount_point = \"/mnt/input\",\n",
" extra_configs = {myinput_blob_config:dbutils.secrets.get(scope = \"amlscope\", key = myinput_blob_secretname)})\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Mounting: Python sample code for ADLS\n",
"Given an ADLS data reference named \"input\" the following widget params will be made available in the Databricks notebook:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# This contains the input URI\n",
"dbutils.widgets.get(\"input\") \n",
"myinput_uri = getArgument(\"input\")\n",
"\n",
"# This contains the client id for the service principal \n",
"# that has access to the adls input\n",
"dbutils.widgets.get(\"input_adls_clientid\") \n",
"myinput_adls_clientid = getArgument(\"input_adls_clientid\")\n",
"\n",
"# This contains the name of a Databricks secret (in the predefined \"amlscope\" secret scope) \n",
"# that contains the secret for the above mentioned service principal\n",
"dbutils.widgets.get(\"input_adls_secretname\") \n",
"myinput_adls_secretname = getArgument(\"input_adls_secretname\")\n",
"\n",
"# This contains the refresh url for the mounting configs\n",
"dbutils.widgets.get(\"input_adls_refresh_url\") \n",
"myinput_adls_refresh_url = getArgument(\"input_adls_refresh_url\")\n",
"\n",
"# Usage \n",
"configs = {\"dfs.adls.oauth2.access.token.provider.type\": \"ClientCredential\",\n",
" \"dfs.adls.oauth2.client.id\": myinput_adls_clientid,\n",
" \"dfs.adls.oauth2.credential\": dbutils.secrets.get(scope = \"amlscope\", key =myinput_adls_secretname),\n",
" \"dfs.adls.oauth2.refresh.url\": myinput_adls_refresh_url}\n",
"\n",
"dbutils.fs.mount(\n",
" source = myinput_uri,\n",
" mount_point = \"/mnt/output\",\n",
" extra_configs = configs)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use Databricks from Azure Machine Learning Pipeline\n",
"To use Databricks as a compute target from Azure Machine Learning Pipeline, a DatabricksStep is used. Let's define a datasource (via DataReference) and intermediate data (via PipelineData) to be used in DatabricksStep."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use the default blob storage\n",
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
"print('Datastore {} will be used'.format(def_blob_store.name))\n",
"\n",
"# We are uploading a sample file in the local directory to be used as a datasource\n",
"def_blob_store.upload_files(files=[\"./testdata.txt\"], target_path=\"dbtest\", overwrite=False)\n",
"\n",
"step_1_input = DataReference(datastore=def_blob_store, path_on_datastore=\"dbtest\",\n",
" data_reference_name=\"input\")\n",
"\n",
"step_1_output = PipelineData(\"output\", datastore=def_blob_store)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add a DatabricksStep\n",
"Adds a Databricks notebook as a step in a Pipeline.\n",
"- ***name:** Name of the Module\n",
"- **inputs:** List of input connections for data consumed by this step. Fetch this inside the notebook using dbutils.widgets.get(\"input\")\n",
"- **outputs:** List of output port definitions for outputs produced by this step. Fetch this inside the notebook using dbutils.widgets.get(\"output\")\n",
"- **existing_cluster_id:** Cluster ID of an existing Interactive cluster on the Databricks workspace. If you are providing this, do not provide any of the parameters below that are used to create a new cluster such as spark_version, node_type, etc.\n",
"- **spark_version:** Version of spark for the databricks run cluster. default value: 4.0.x-scala2.11\n",
"- **node_type:** Azure vm node types for the databricks run cluster. default value: Standard_D3_v2\n",
"- **num_workers:** Specifies a static number of workers for the databricks run cluster\n",
"- **min_workers:** Specifies a min number of workers to use for auto-scaling the databricks run cluster\n",
"- **max_workers:** Specifies a max number of workers to use for auto-scaling the databricks run cluster\n",
"- **spark_env_variables:** Spark environment variables for the databricks run cluster (dictionary of {str:str}). default value: {'PYSPARK_PYTHON': '/databricks/python3/bin/python3'}\n",
"- **notebook_path:** Path to the notebook in the databricks instance. If you are providing this, do not provide python script related paramaters or JAR related parameters.\n",
"- **notebook_params:** Parameters for the databricks notebook (dictionary of {str:str}). Fetch this inside the notebook using dbutils.widgets.get(\"myparam\")\n",
"- **python_script_path:** The path to the python script in the DBFS or S3. If you are providing this, do not provide python_script_name which is used for uploading script from local machine.\n",
"- **python_script_params:** Parameters for the python script (list of str)\n",
"- **main_class_name:** The name of the entry point in a JAR module. If you are providing this, do not provide any python script or notebook related parameters.\n",
"- **jar_params:** Parameters for the JAR module (list of str)\n",
"- **python_script_name:** name of a python script on your local machine (relative to source_directory). If you are providing this do not provide python_script_path which is used to execute a remote python script; or any of the JAR or notebook related parameters.\n",
"- **source_directory:** folder that contains the script and other files\n",
"- **hash_paths:** list of paths to hash to detect a change in source_directory (script file is always hashed)\n",
"- **run_name:** Name in databricks for this run\n",
"- **timeout_seconds:** Timeout for the databricks run\n",
"- **runconfig:** Runconfig to use. Either pass runconfig or each library type as a separate parameter but do not mix the two\n",
"- **maven_libraries:** maven libraries for the databricks run\n",
"- **pypi_libraries:** pypi libraries for the databricks run\n",
"- **egg_libraries:** egg libraries for the databricks run\n",
"- **jar_libraries:** jar libraries for the databricks run\n",
"- **rcran_libraries:** rcran libraries for the databricks run\n",
"- **compute_target:** Azure Databricks compute\n",
"- **allow_reuse:** Whether the step should reuse previous results when run with the same settings/inputs\n",
"- **version:** Optional version tag to denote a change in functionality for the step\n",
"\n",
"\\* *denotes required fields* \n",
"*You must provide exactly one of num_workers or min_workers and max_workers paramaters* \n",
"*You must provide exactly one of databricks_compute or databricks_compute_name parameters*\n",
"\n",
"## Use runconfig to specify library dependencies\n",
"You can use a runconfig to specify the library dependencies for your cluster in Databricks. The runconfig will contain a databricks section as follows:\n",
"\n",
"```yaml\n",
"environment:\n",
"# Databricks details\n",
" databricks:\n",
"# List of maven libraries.\n",
" mavenLibraries:\n",
" - coordinates: org.jsoup:jsoup:1.7.1\n",
" repo: ''\n",
" exclusions:\n",
" - slf4j:slf4j\n",
" - '*:hadoop-client'\n",
"# List of PyPi libraries\n",
" pypiLibraries:\n",
" - package: beautifulsoup4\n",
" repo: ''\n",
"# List of RCran libraries\n",
" rcranLibraries:\n",
" -\n",
"# Coordinates.\n",
" package: ada\n",
"# Repo\n",
" repo: http://cran.us.r-project.org\n",
"# List of JAR libraries\n",
" jarLibraries:\n",
" -\n",
"# Coordinates.\n",
" library: dbfs:/mnt/libraries/library.jar\n",
"# List of Egg libraries\n",
" eggLibraries:\n",
" -\n",
"# Coordinates.\n",
" library: dbfs:/mnt/libraries/library.egg\n",
"```\n",
"\n",
"You can then create a RunConfiguration object using this file and pass it as the runconfig parameter to DatabricksStep.\n",
"```python\n",
"from azureml.core.runconfig import RunConfiguration\n",
"\n",
"runconfig = RunConfiguration()\n",
"runconfig.load(path='<directory_where_runconfig_is_stored>', name='<runconfig_file_name>')\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Running the demo notebook already added to the Databricks workspace\n",
"Create a notebook in the Azure Databricks workspace, and provide the path to that notebook as the value associated with the environment variable \"DATABRICKS_NOTEBOOK_PATH\". This will then set the variable notebook_path when you run the code cell below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"notebook_path=os.getenv(\"DATABRICKS_NOTEBOOK_PATH\", \"<my-databricks-notebook-path>\") # Databricks notebook path\n",
"\n",
"dbNbStep = DatabricksStep(\n",
" name=\"DBNotebookInWS\",\n",
" inputs=[step_1_input],\n",
" outputs=[step_1_output],\n",
" num_workers=1,\n",
" notebook_path=notebook_path,\n",
" notebook_params={'myparam': 'testparam'},\n",
" run_name='DB_Notebook_demo',\n",
" compute_target=databricks_compute,\n",
" allow_reuse=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build and submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#steps = [dbNbStep]\n",
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
"#pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline)\n",
"#pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#from azureml.widgets import RunDetails\n",
"#RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Running a Python script from DBFS\n",
"This shows how to run a Python script in DBFS. \n",
"\n",
"To complete this, you will need to first upload the Python script in your local machine to DBFS using the [CLI](https://docs.azuredatabricks.net/user-guide/dbfs-databricks-file-system.html). The CLI command is given below:\n",
"\n",
"```\n",
"dbfs cp ./train-db-dbfs.py dbfs:/train-db-dbfs.py\n",
"```\n",
"\n",
"The code in the below cell assumes that you have completed the previous step of uploading the script `train-db-dbfs.py` to the root folder in DBFS."
]
},
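{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check, you can list the DBFS root with the same CLI to confirm that the upload succeeded:\n",
"\n",
"```\n",
"dbfs ls dbfs:/\n",
"```"
]
},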
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"python_script_path = os.getenv(\"DATABRICKS_PYTHON_SCRIPT_PATH\", \"<my-databricks-python-script-path>\") # Databricks python script path\n",
"\n",
"dbPythonInDbfsStep = DatabricksStep(\n",
" name=\"DBPythonInDBFS\",\n",
" inputs=[step_1_input],\n",
" num_workers=1,\n",
" python_script_path=python_script_path,\n",
" python_script_params={'--input_data'},\n",
" run_name='DB_Python_demo',\n",
" compute_target=databricks_compute,\n",
" allow_reuse=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build and submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#steps = [dbPythonInDbfsStep]\n",
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
"#pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline)\n",
"#pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#from azureml.widgets import RunDetails\n",
"#RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Running a Python script in Databricks that currenlty is in local computer\n",
"To run a Python script that is currently in your local computer, follow the instructions below. \n",
"\n",
"The commented out code below code assumes that you have `train-db-local.py` in the `scripts` subdirectory under the current working directory.\n",
"\n",
"In this case, the Python script will be uploaded first to DBFS, and then the script will be run in Databricks."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"python_script_name = \"train-db-local.py\"\n",
"source_directory = \".\"\n",
"\n",
"dbPythonInLocalMachineStep = DatabricksStep(\n",
" name=\"DBPythonInLocalMachine\",\n",
" inputs=[step_1_input],\n",
" num_workers=1,\n",
" python_script_name=python_script_name,\n",
" source_directory=source_directory,\n",
" run_name='DB_Python_Local_demo',\n",
" compute_target=databricks_compute,\n",
" allow_reuse=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build and submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"steps = [dbPythonInLocalMachineStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_Python_Local_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4. Running a JAR job that is alreay added in DBFS\n",
"To run a JAR job that is already uploaded to DBFS, follow the instructions below. You will first upload the JAR file to DBFS using the [CLI](https://docs.azuredatabricks.net/user-guide/dbfs-databricks-file-system.html).\n",
"\n",
"The commented out code in the below cell assumes that you have uploaded `train-db-dbfs.jar` to the root folder in DBFS. You can upload `train-db-dbfs.jar` to the root folder in DBFS using this commandline so you can use `jar_library_dbfs_path = \"dbfs:/train-db-dbfs.jar\"`:\n",
"\n",
"```\n",
"dbfs cp ./train-db-dbfs.jar dbfs:/train-db-dbfs.jar\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"main_jar_class_name = \"com.microsoft.aeva.Main\"\n",
"jar_library_dbfs_path = os.getenv(\"DATABRICKS_JAR_LIB_PATH\", \"<my-databricks-jar-lib-path>\") # Databricks jar library path\n",
"\n",
"dbJarInDbfsStep = DatabricksStep(\n",
" name=\"DBJarInDBFS\",\n",
" inputs=[step_1_input],\n",
" num_workers=1,\n",
" main_class_name=main_jar_class_name,\n",
" jar_params={'arg1', 'arg2'},\n",
" run_name='DB_JAR_demo',\n",
" jar_libraries=[JarLibrary(jar_library_dbfs_path)],\n",
" compute_target=databricks_compute,\n",
" allow_reuse=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build and submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#steps = [dbJarInDbfsStep]\n",
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
"#pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline)\n",
"#pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#from azureml.widgets import RunDetails\n",
"#RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Next: ADLA as a Compute Target\n",
"To use ADLA as a compute target from Azure Machine Learning Pipeline, a AdlaStep is used. This [notebook](./aml-pipelines-use-adla-as-compute-target.ipynb) demonstrates the use of AdlaStep in Azure Machine Learning Pipeline."
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,396 +1,366 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Azure ML & Azure Databricks notebooks by Parashar Shah.\n", "Azure ML & Azure Databricks notebooks by Parashar Shah.\n",
"\n", "\n",
"Copyright (c) Microsoft Corporation. All rights reserved.\n", "Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"![04ACI](files/tables/image2.JPG)" "#Model Building"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "code",
"metadata": {}, "execution_count": null,
"source": [ "metadata": {},
"#Model Building" "outputs": [],
] "source": [
}, "import os\n",
{ "import pprint\n",
"cell_type": "code", "import numpy as np\n",
"execution_count": null, "\n",
"metadata": {}, "from pyspark.ml import Pipeline, PipelineModel\n",
"outputs": [], "from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler\n",
"source": [ "from pyspark.ml.classification import LogisticRegression\n",
"import os\n", "from pyspark.ml.evaluation import BinaryClassificationEvaluator\n",
"import pprint\n", "from pyspark.ml.tuning import CrossValidator, ParamGridBuilder"
"import numpy as np\n", ]
"\n", },
"from pyspark.ml import Pipeline, PipelineModel\n", {
"from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler\n", "cell_type": "code",
"from pyspark.ml.classification import LogisticRegression\n", "execution_count": null,
"from pyspark.ml.evaluation import BinaryClassificationEvaluator\n", "metadata": {},
"from pyspark.ml.tuning import CrossValidator, ParamGridBuilder" "outputs": [],
] "source": [
}, "import azureml.core\n",
{ "\n",
"cell_type": "code", "# Check core SDK version number\n",
"execution_count": null, "print(\"SDK version:\", azureml.core.VERSION)"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"import azureml.core\n", "cell_type": "code",
"\n", "execution_count": null,
"# Check core SDK version number\n", "metadata": {},
"print(\"SDK version:\", azureml.core.VERSION)" "outputs": [],
] "source": [
}, "# Set auth to be used by workspace related APIs.\n",
{ "# For automation or CI/CD ServicePrincipalAuthentication can be used.\n",
"cell_type": "code", "# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py\n",
"execution_count": null, "auth = None"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"##TESTONLY\n", "cell_type": "code",
"# import auth creds from notebook parameters\n", "execution_count": null,
"tenant = dbutils.widgets.get('tenant_id')\n", "metadata": {},
"username = dbutils.widgets.get('service_principal_id')\n", "outputs": [],
"password = dbutils.widgets.get('service_principal_password')\n", "source": [
"\n", "# import the Workspace class and check the azureml SDK version\n",
"auth = azureml.core.authentication.ServicePrincipalAuthentication(tenant, username, password)" "from azureml.core import Workspace\n",
] "\n",
}, "ws = Workspace.from_config(auth = auth)\n",
{ "print('Workspace name: ' + ws.name, \n",
"cell_type": "code", " 'Azure region: ' + ws.location, \n",
"execution_count": null, " 'Subscription id: ' + ws.subscription_id, \n",
"metadata": {}, " 'Resource group: ' + ws.resource_group, sep = '\\n')"
"outputs": [], ]
"source": [ },
"# import the Workspace class and check the azureml SDK version\n", {
"from azureml.core import Workspace\n", "cell_type": "code",
"\n", "execution_count": null,
"ws = Workspace.from_config(auth = auth)\n", "metadata": {},
"print('Workspace name: ' + ws.name, \n", "outputs": [],
" 'Azure region: ' + ws.location, \n", "source": [
" 'Subscription id: ' + ws.subscription_id, \n", "#get the train and test datasets\n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')" "train_data_path = \"AdultCensusIncomeTrain\"\n",
] "test_data_path = \"AdultCensusIncomeTest\"\n",
}, "\n",
{ "train = spark.read.parquet(train_data_path)\n",
"cell_type": "code", "test = spark.read.parquet(test_data_path)\n",
"execution_count": null, "\n",
"metadata": {}, "print(\"train: ({}, {})\".format(train.count(), len(train.columns)))\n",
"outputs": [], "print(\"test: ({}, {})\".format(test.count(), len(test.columns)))\n",
"source": [ "\n",
"##PUBLISHONLY\n", "train.printSchema()"
"## import the Workspace class and check the azureml SDK version\n", ]
"#from azureml.core import Workspace\n", },
"#\n", {
"#ws = Workspace.from_config()\n", "cell_type": "markdown",
"#print('Workspace name: ' + ws.name, \n", "metadata": {},
"# 'Azure region: ' + ws.location, \n", "source": [
"# 'Subscription id: ' + ws.subscription_id, \n", "#Define Model"
"# 'Resource group: ' + ws.resource_group, sep = '\\n')" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "label = \"income\"\n",
"#get the train and test datasets\n", "dtypes = dict(train.dtypes)\n",
"train_data_path = \"AdultCensusIncomeTrain\"\n", "dtypes.pop(label)\n",
"test_data_path = \"AdultCensusIncomeTest\"\n", "\n",
"\n", "si_xvars = []\n",
"train = spark.read.parquet(train_data_path)\n", "ohe_xvars = []\n",
"test = spark.read.parquet(test_data_path)\n", "featureCols = []\n",
"\n", "for idx,key in enumerate(dtypes):\n",
"print(\"train: ({}, {})\".format(train.count(), len(train.columns)))\n", " if dtypes[key] == \"string\":\n",
"print(\"test: ({}, {})\".format(test.count(), len(test.columns)))\n", " featureCol = \"-\".join([key, \"encoded\"])\n",
"\n", " featureCols.append(featureCol)\n",
"train.printSchema()" " \n",
] " tmpCol = \"-\".join([key, \"tmp\"])\n",
}, " # string-index and one-hot encode the string column\n",
{ " #https://spark.apache.org/docs/2.3.0/api/java/org/apache/spark/ml/feature/StringIndexer.html\n",
"cell_type": "markdown", " #handleInvalid: Param for how to handle invalid data (unseen labels or NULL values). \n",
"metadata": {}, " #Options are 'skip' (filter out rows with invalid data), 'error' (throw an error), \n",
"source": [ " #or 'keep' (put invalid data in a special additional bucket, at index numLabels). Default: \"error\"\n",
"#Define Model" " si_xvars.append(StringIndexer(inputCol=key, outputCol=tmpCol, handleInvalid=\"skip\"))\n",
] " ohe_xvars.append(OneHotEncoder(inputCol=tmpCol, outputCol=featureCol))\n",
}, " else:\n",
{ " featureCols.append(key)\n",
"cell_type": "code", "\n",
"execution_count": null, "# string-index the label column into a column named \"label\"\n",
"metadata": {}, "si_label = StringIndexer(inputCol=label, outputCol='label')\n",
"outputs": [], "\n",
"source": [ "# assemble the encoded feature columns in to a column named \"features\"\n",
"label = \"income\"\n", "assembler = VectorAssembler(inputCols=featureCols, outputCol=\"features\")"
"dtypes = dict(train.dtypes)\n", ]
"dtypes.pop(label)\n", },
"\n", {
"si_xvars = []\n", "cell_type": "code",
"ohe_xvars = []\n", "execution_count": null,
"featureCols = []\n", "metadata": {},
"for idx,key in enumerate(dtypes):\n", "outputs": [],
" if dtypes[key] == \"string\":\n", "source": [
" featureCol = \"-\".join([key, \"encoded\"])\n", "from azureml.core.run import Run\n",
" featureCols.append(featureCol)\n", "from azureml.core.experiment import Experiment\n",
" \n", "import numpy as np\n",
" tmpCol = \"-\".join([key, \"tmp\"])\n", "import os\n",
" # string-index and one-hot encode the string column\n", "import shutil\n",
" #https://spark.apache.org/docs/2.3.0/api/java/org/apache/spark/ml/feature/StringIndexer.html\n", "\n",
" #handleInvalid: Param for how to handle invalid data (unseen labels or NULL values). \n", "model_name = \"AdultCensus_runHistory.mml\"\n",
" #Options are 'skip' (filter out rows with invalid data), 'error' (throw an error), \n", "model_dbfs = os.path.join(\"/dbfs\", model_name)\n",
" #or 'keep' (put invalid data in a special additional bucket, at index numLabels). Default: \"error\"\n", "run_history_name = 'spark-ml-notebook'\n",
" si_xvars.append(StringIndexer(inputCol=key, outputCol=tmpCol, handleInvalid=\"skip\"))\n", "\n",
" ohe_xvars.append(OneHotEncoder(inputCol=tmpCol, outputCol=featureCol))\n", "# start a training run by defining an experiment\n",
" else:\n", "myexperiment = Experiment(ws, \"Ignite_AI_Talk\")\n",
" featureCols.append(key)\n", "root_run = myexperiment.start_logging()\n",
"\n", "\n",
"# string-index the label column into a column named \"label\"\n", "# Regularization Rates - \n",
"si_label = StringIndexer(inputCol=label, outputCol='label')\n", "regs = [0.0001, 0.001, 0.01, 0.1]\n",
"\n", " \n",
"# assemble the encoded feature columns in to a column named \"features\"\n", "# try a bunch of regularization rate in a Logistic Regression model\n",
"assembler = VectorAssembler(inputCols=featureCols, outputCol=\"features\")" "for reg in regs:\n",
] " print(\"Regularization rate: {}\".format(reg))\n",
}, " # create a bunch of child runs\n",
{ " with root_run.child_run(\"reg-\" + str(reg)) as run:\n",
"cell_type": "code", " # create a new Logistic Regression model.\n",
"execution_count": null, " lr = LogisticRegression(regParam=reg)\n",
"metadata": {}, " \n",
"outputs": [], " # put together the pipeline\n",
"source": [ " pipe = Pipeline(stages=[*si_xvars, *ohe_xvars, si_label, assembler, lr])\n",
"from azureml.core.run import Run\n", "\n",
"from azureml.core.experiment import Experiment\n", " # train the model\n",
"import numpy as np\n", " model_p = pipe.fit(train)\n",
"import os\n", " \n",
"import shutil\n", " # make prediction\n",
"\n", " pred = model_p.transform(test)\n",
"model_name = \"AdultCensus_runHistory.mml\"\n", " \n",
"model_dbfs = os.path.join(\"/dbfs\", model_name)\n", " # evaluate. note only 2 metrics are supported out of the box by Spark ML.\n",
"run_history_name = 'spark-ml-notebook'\n", " bce = BinaryClassificationEvaluator(rawPredictionCol='rawPrediction')\n",
"\n", " au_roc = bce.setMetricName('areaUnderROC').evaluate(pred)\n",
"# start a training run by defining an experiment\n", " au_prc = bce.setMetricName('areaUnderPR').evaluate(pred)\n",
"myexperiment = Experiment(ws, \"Ignite_AI_Talk\")\n", "\n",
"root_run = myexperiment.start_logging()\n", " print(\"Area under ROC: {}\".format(au_roc))\n",
"\n", " print(\"Area Under PR: {}\".format(au_prc))\n",
"# Regularization Rates - \n", " \n",
"regs = [0.0001, 0.001, 0.01, 0.1]\n", " # log reg, au_roc, au_prc and feature names in run history\n",
" \n", " run.log(\"reg\", reg)\n",
"# try a bunch of regularization rate in a Logistic Regression model\n", " run.log(\"au_roc\", au_roc)\n",
"for reg in regs:\n", " run.log(\"au_prc\", au_prc)\n",
" print(\"Regularization rate: {}\".format(reg))\n", " run.log_list(\"columns\", train.columns)\n",
" # create a bunch of child runs\n", "\n",
" with root_run.child_run(\"reg-\" + str(reg)) as run:\n", " # save model\n",
" # create a new Logistic Regression model.\n", " model_p.write().overwrite().save(model_name)\n",
" lr = LogisticRegression(regParam=reg)\n", " \n",
" \n", " # upload the serialized model into run history record\n",
" # put together the pipeline\n", " mdl, ext = model_name.split(\".\")\n",
" pipe = Pipeline(stages=[*si_xvars, *ohe_xvars, si_label, assembler, lr])\n", " model_zip = mdl + \".zip\"\n",
"\n", " shutil.make_archive(mdl, 'zip', model_dbfs)\n",
" # train the model\n", " run.upload_file(\"outputs/\" + model_name, model_zip) \n",
" model_p = pipe.fit(train)\n", " #run.upload_file(\"outputs/\" + model_name, path_or_stream = model_dbfs) #cannot deal with folders\n",
" \n", "\n",
" # make prediction\n", " # now delete the serialized model from local folder since it is already uploaded to run history \n",
" pred = model_p.transform(test)\n", " shutil.rmtree(model_dbfs)\n",
" \n", " os.remove(model_zip)\n",
" # evaluate. note only 2 metrics are supported out of the box by Spark ML.\n", " \n",
" bce = BinaryClassificationEvaluator(rawPredictionCol='rawPrediction')\n", "# Declare run completed\n",
" au_roc = bce.setMetricName('areaUnderROC').evaluate(pred)\n", "root_run.complete()\n",
" au_prc = bce.setMetricName('areaUnderPR').evaluate(pred)\n", "root_run_id = root_run.id\n",
"\n", "print (\"run id:\", root_run.id)"
" print(\"Area under ROC: {}\".format(au_roc))\n", ]
" print(\"Area Under PR: {}\".format(au_prc))\n", },
" \n", {
" # log reg, au_roc, au_prc and feature names in run history\n", "cell_type": "code",
" run.log(\"reg\", reg)\n", "execution_count": null,
" run.log(\"au_roc\", au_roc)\n", "metadata": {},
" run.log(\"au_prc\", au_prc)\n", "outputs": [],
" run.log_list(\"columns\", train.columns)\n", "source": [
"\n", "metrics = root_run.get_metrics(recursive=True)\n",
" # save model\n", "best_run_id = max(metrics, key = lambda k: metrics[k]['au_roc'])\n",
" model_p.write().overwrite().save(model_name)\n", "print(best_run_id, metrics[best_run_id]['au_roc'], metrics[best_run_id]['reg'])"
" \n", ]
" # upload the serialized model into run history record\n", },
" mdl, ext = model_name.split(\".\")\n", {
" model_zip = mdl + \".zip\"\n", "cell_type": "code",
" shutil.make_archive(mdl, 'zip', model_dbfs)\n", "execution_count": null,
" run.upload_file(\"outputs/\" + model_name, model_zip) \n", "metadata": {},
" #run.upload_file(\"outputs/\" + model_name, path_or_stream = model_dbfs) #cannot deal with folders\n", "outputs": [],
"\n", "source": [
" # now delete the serialized model from local folder since it is already uploaded to run history \n", "#Get the best run\n",
" shutil.rmtree(model_dbfs)\n", "child_runs = {}\n",
" os.remove(model_zip)\n", "\n",
" \n", "for r in root_run.get_children():\n",
"# Declare run completed\n", " child_runs[r.id] = r\n",
"root_run.complete()\n", " \n",
"root_run_id = root_run.id\n", "best_run = child_runs[best_run_id]"
"print (\"run id:\", root_run.id)" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "#Download the model from the best run to a local folder\n",
"metrics = root_run.get_metrics(recursive=True)\n", "best_model_file_name = \"best_model.zip\"\n",
"best_run_id = max(metrics, key = lambda k: metrics[k]['au_roc'])\n", "best_run.download_file(name = 'outputs/' + model_name, output_file_path = best_model_file_name)"
"print(best_run_id, metrics[best_run_id]['au_roc'], metrics[best_run_id]['reg'])" ]
] },
}, {
{ "cell_type": "markdown",
"cell_type": "code", "metadata": {},
"execution_count": null, "source": [
"metadata": {}, "#Model Evaluation"
"outputs": [], ]
"source": [ },
"#Get the best run\n", {
"child_runs = {}\n", "cell_type": "code",
"\n", "execution_count": null,
"for r in root_run.get_children():\n", "metadata": {},
" child_runs[r.id] = r\n", "outputs": [],
" \n", "source": [
"best_run = child_runs[best_run_id]" "##unzip the model to dbfs (as load() seems to require that) and load it.\n",
] "if os.path.isfile(model_dbfs) or os.path.isdir(model_dbfs):\n",
}, " shutil.rmtree(model_dbfs)\n",
{ "shutil.unpack_archive(best_model_file_name, model_dbfs)\n",
"cell_type": "code", "\n",
"execution_count": null, "model_p_best = PipelineModel.load(model_name)"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"#Download the model from the best run to a local folder\n", "cell_type": "code",
"best_model_file_name = \"best_model.zip\"\n", "execution_count": null,
"best_run.download_file(name = 'outputs/' + model_name, output_file_path = best_model_file_name)" "metadata": {},
] "outputs": [],
}, "source": [
{ "# make prediction\n",
"cell_type": "markdown", "pred = model_p_best.transform(test)\n",
"metadata": {}, "output = pred[['hours_per_week','age','workclass','marital_status','income','prediction']]\n",
"source": [ "display(output.limit(5))"
"#Model Evaluation" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "# evaluate. note only 2 metrics are supported out of the box by Spark ML.\n",
"##unzip the model to dbfs (as load() seems to require that) and load it.\n", "bce = BinaryClassificationEvaluator(rawPredictionCol='rawPrediction')\n",
"if os.path.isfile(model_dbfs) or os.path.isdir(model_dbfs):\n", "au_roc = bce.setMetricName('areaUnderROC').evaluate(pred)\n",
" shutil.rmtree(model_dbfs)\n", "au_prc = bce.setMetricName('areaUnderPR').evaluate(pred)\n",
"shutil.unpack_archive(best_model_file_name, model_dbfs)\n", "\n",
"\n", "print(\"Area under ROC: {}\".format(au_roc))\n",
"model_p_best = PipelineModel.load(model_name)" "print(\"Area Under PR: {}\".format(au_prc))"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "markdown",
"execution_count": null, "metadata": {},
"metadata": {}, "source": [
"outputs": [], "#Model Persistence"
"source": [ ]
"# make prediction\n", },
"pred = model_p_best.transform(test)\n", {
"output = pred[['hours_per_week','age','workclass','marital_status','income','prediction']]\n", "cell_type": "code",
"display(output.limit(5))" "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "##NOTE: by default the model is saved to and loaded from /dbfs/ instead of cwd!\n",
"execution_count": null, "model_p_best.write().overwrite().save(model_name)\n",
"metadata": {}, "print(\"saved model to {}\".format(model_dbfs))"
"outputs": [], ]
"source": [ },
"# evaluate. note only 2 metrics are supported out of the box by Spark ML.\n", {
"bce = BinaryClassificationEvaluator(rawPredictionCol='rawPrediction')\n", "cell_type": "code",
"au_roc = bce.setMetricName('areaUnderROC').evaluate(pred)\n", "execution_count": null,
"au_prc = bce.setMetricName('areaUnderPR').evaluate(pred)\n", "metadata": {},
"\n", "outputs": [],
"print(\"Area under ROC: {}\".format(au_roc))\n", "source": [
"print(\"Area Under PR: {}\".format(au_prc))" "%sh\n",
] "\n",
}, "ls -la /dbfs/AdultCensus_runHistory.mml/*"
{ ]
"cell_type": "markdown", },
"metadata": {}, {
"source": [ "cell_type": "code",
"#Model Persistence" "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "dbutils.notebook.exit(\"success\")"
"execution_count": null, ]
"metadata": {}, }
"outputs": [],
"source": [
"##NOTE: by default the model is saved to and loaded from /dbfs/ instead of cwd!\n",
"model_p_best.write().overwrite().save(model_name)\n",
"print(\"saved model to {}\".format(model_dbfs))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%sh\n",
"\n",
"ls -la /dbfs/AdultCensus_runHistory.mml/*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dbutils.notebook.exit(\"success\")"
]
}
],
"metadata": {
"authors": [
{
"name": "pasha"
},
{
"name": "wamartin"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3", "authors": [
"language": "python", {
"name": "python3" "name": "pasha"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"name": "build-model-run-history-03",
"notebookId": 3836944406456339
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 1
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"name": "03.Build_model_runHistory",
"notebookId": 3836944406456339
},
"nbformat": 4,
"nbformat_minor": 1
}


@@ -1,354 +1,310 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Azure ML & Azure Databricks notebooks by Parashar Shah.\n", "Azure ML & Azure Databricks notebooks by Parashar Shah.\n",
"\n", "\n",
"Copyright (c) Microsoft Corporation. All rights reserved.\n", "Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Please ensure you have run all previous notebooks in sequence before running this.\n", "Please ensure you have run all previous notebooks in sequence before running this.\n",
"\n", "\n",
"Please Register Azure Container Instance(ACI) using Azure Portal: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services#portal in your subscription before using the SDK to deploy your ML model to ACI." "Please Register Azure Container Instance(ACI) using Azure Portal: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services#portal in your subscription before using the SDK to deploy your ML model to ACI."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "code",
"metadata": {}, "execution_count": null,
"source": [ "metadata": {},
"![04ACI](files/tables/image3.JPG)" "outputs": [],
] "source": [
}, "import azureml.core\n",
{ "\n",
"cell_type": "code", "# Check core SDK version number\n",
"execution_count": null, "print(\"SDK version:\", azureml.core.VERSION)"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"import azureml.core\n", "cell_type": "code",
"\n", "execution_count": null,
"# Check core SDK version number\n", "metadata": {},
"print(\"SDK version:\", azureml.core.VERSION)" "outputs": [],
] "source": [
}, "# Set auth to be used by workspace related APIs.\n",
{ "# For automation or CI/CD ServicePrincipalAuthentication can be used.\n",
"cell_type": "code", "# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py\n",
"execution_count": null, "auth = None"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"##TESTONLY\n", "cell_type": "code",
"# import auth creds from notebook parameters\n", "execution_count": null,
"tenant = dbutils.widgets.get('tenant_id')\n", "metadata": {},
"username = dbutils.widgets.get('service_principal_id')\n", "outputs": [],
"password = dbutils.widgets.get('service_principal_password')\n", "source": [
"\n", "from azureml.core import Workspace\n",
"auth = azureml.core.authentication.ServicePrincipalAuthentication(tenant, username, password)" "\n",
] "ws = Workspace.from_config(auth = auth)\n",
}, "print('Workspace name: ' + ws.name, \n",
{ " 'Azure region: ' + ws.location, \n",
"cell_type": "code", " 'Subscription id: ' + ws.subscription_id, \n",
"execution_count": null, " 'Resource group: ' + ws.resource_group, sep = '\\n')"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"from azureml.core import Workspace\n", "cell_type": "code",
"\n", "execution_count": null,
"#'''\n", "metadata": {},
"ws = Workspace.from_config(auth = auth)\n", "outputs": [],
"print('Workspace name: ' + ws.name, \n", "source": [
" 'Azure region: ' + ws.location, \n", "##NOTE: service deployment always gets the model from the current working dir.\n",
" 'Subscription id: ' + ws.subscription_id, \n", "import os\n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n", "\n",
"#'''" "model_name = \"AdultCensus_runHistory.mml\" # \n",
] "model_name_dbfs = os.path.join(\"/dbfs\", model_name)\n",
}, "\n",
{ "print(\"copy model from dbfs to local\")\n",
"cell_type": "code", "model_local = \"file:\" + os.getcwd() + \"/\" + model_name\n",
"execution_count": null, "dbutils.fs.cp(model_name, model_local, True)"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"##PUBLISHONLY\n", "cell_type": "code",
"#from azureml.core import Workspace\n", "execution_count": null,
"#import azureml.core\n", "metadata": {},
"#\n", "outputs": [],
"## Check core SDK version number\n", "source": [
"#print(\"SDK version:\", azureml.core.VERSION)\n", "#Register the model\n",
"#\n", "from azureml.core.model import Model\n",
"##'''\n", "mymodel = Model.register(model_path = model_name, # this points to a local file\n",
"#ws = Workspace.from_config()\n", " model_name = model_name, # this is the name the model is registered as, am using same name for both path and name. \n",
"#print('Workspace name: ' + ws.name, \n", " description = \"ADB trained model by Parashar\",\n",
"# 'Azure region: ' + ws.location, \n", " workspace = ws)\n",
"# 'Subscription id: ' + ws.subscription_id, \n", "\n",
"# 'Resource group: ' + ws.resource_group, sep = '\\n')\n", "print(mymodel.name, mymodel.description, mymodel.version)"
"##'''" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "#%%writefile score_sparkml.py\n",
"##NOTE: service deployment always gets the model from the current working dir.\n", "score_sparkml = \"\"\"\n",
"import os\n", " \n",
"\n", "import json\n",
"model_name = \"AdultCensus_runHistory.mml\" # \n", " \n",
"model_name_dbfs = os.path.join(\"/dbfs\", model_name)\n", "def init():\n",
"\n", " # One-time initialization of PySpark and predictive model\n",
"print(\"copy model from dbfs to local\")\n", " import pyspark\n",
"model_local = \"file:\" + os.getcwd() + \"/\" + model_name\n", " from azureml.core.model import Model\n",
"dbutils.fs.cp(model_name, model_local, True)" " from pyspark.ml import PipelineModel\n",
] " \n",
}, " global trainedModel\n",
{ " global spark\n",
"cell_type": "code", " \n",
"execution_count": null, " spark = pyspark.sql.SparkSession.builder.appName(\"ADB and AML notebook by Parashar\").getOrCreate()\n",
"metadata": {}, " model_name = \"{model_name}\" #interpolated\n",
"outputs": [], " model_path = Model.get_model_path(model_name)\n",
"source": [ " trainedModel = PipelineModel.load(model_path)\n",
"#Register the model\n", " \n",
"from azureml.core.model import Model\n", "def run(input_json):\n",
"mymodel = Model.register(model_path = model_name, # this points to a local file\n", " if isinstance(trainedModel, Exception):\n",
" model_name = model_name, # this is the name the model is registered as, am using same name for both path and name. \n", " return json.dumps({{\"trainedModel\":str(trainedModel)}})\n",
" description = \"ADB trained model by Parashar\",\n", " \n",
" workspace = ws)\n", " try:\n",
"\n", " sc = spark.sparkContext\n",
"print(mymodel.name, mymodel.description, mymodel.version)" " input_list = json.loads(input_json)\n",
] " input_rdd = sc.parallelize(input_list)\n",
}, " input_df = spark.read.json(input_rdd)\n",
{ " \n",
"cell_type": "code", " # Compute prediction\n",
"execution_count": null, " prediction = trainedModel.transform(input_df)\n",
"metadata": {}, " #result = prediction.first().prediction\n",
"outputs": [], " predictions = prediction.collect()\n",
"source": [ " \n",
"#%%writefile score_sparkml.py\n", " #Get each scored result\n",
"score_sparkml = \"\"\"\n", " preds = [str(x['prediction']) for x in predictions]\n",
" \n", " result = \",\".join(preds)\n",
"import json\n", " # you can return any data type as long as it is JSON-serializable\n",
" \n", " return result.tolist()\n",
"def init():\n", " except Exception as e:\n",
" # One-time initialization of PySpark and predictive model\n", " result = str(e)\n",
" import pyspark\n", " return result\n",
" from azureml.core.model import Model\n", " \n",
" from pyspark.ml import PipelineModel\n", "\"\"\".format(model_name=model_name)\n",
" \n", " \n",
" global trainedModel\n", "exec(score_sparkml)\n",
" global spark\n", " \n",
" \n", "with open(\"score_sparkml.py\", \"w\") as file:\n",
" spark = pyspark.sql.SparkSession.builder.appName(\"ADB and AML notebook by Parashar\").getOrCreate()\n", " file.write(score_sparkml)"
" model_name = \"{model_name}\" #interpolated\n", ]
" model_path = Model.get_model_path(model_name)\n", },
" trainedModel = PipelineModel.load(model_path)\n", {
" \n", "cell_type": "code",
"def run(input_json):\n", "execution_count": null,
" if isinstance(trainedModel, Exception):\n", "metadata": {},
" return json.dumps({{\"trainedModel\":str(trainedModel)}})\n", "outputs": [],
" \n", "source": [
" try:\n", "from azureml.core.conda_dependencies import CondaDependencies \n",
" sc = spark.sparkContext\n", "\n",
" input_list = json.loads(input_json)\n", "myacienv = CondaDependencies.create(conda_packages=['scikit-learn','numpy','pandas']) #showing how to add libs as an eg. - not needed for this model.\n",
" input_rdd = sc.parallelize(input_list)\n", "\n",
" input_df = spark.read.json(input_rdd)\n", "with open(\"mydeployenv.yml\",\"w\") as f:\n",
" \n", " f.write(myacienv.serialize_to_string())"
" # Compute prediction\n", ]
" prediction = trainedModel.transform(input_df)\n", },
" #result = prediction.first().prediction\n", {
" predictions = prediction.collect()\n", "cell_type": "code",
" \n", "execution_count": null,
" #Get each scored result\n", "metadata": {},
" preds = [str(x['prediction']) for x in predictions]\n", "outputs": [],
" result = \",\".join(preds)\n", "source": [
" # you can return any data type as long as it is JSON-serializable\n", "#deploy to ACI\n",
" return result.tolist()\n", "from azureml.core.webservice import AciWebservice, Webservice\n",
" except Exception as e:\n", "\n",
" result = str(e)\n", "myaci_config = AciWebservice.deploy_configuration(\n",
" return result\n", " cpu_cores = 2, \n",
" \n", " memory_gb = 2, \n",
"\"\"\".format(model_name=model_name)\n", " tags = {'name':'Databricks Azure ML ACI'}, \n",
" \n", " description = 'This is for ADB and AML example. Azure Databricks & Azure ML SDK demo with ACI by Parashar.')"
"exec(score_sparkml)\n", ]
" \n", },
"with open(\"score_sparkml.py\", \"w\") as file:\n", {
" file.write(score_sparkml)" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "# this will take 10-15 minutes to finish\n",
"metadata": {}, "\n",
"outputs": [], "service_name = \"aciws\"\n",
"source": [ "runtime = \"spark-py\" \n",
"from azureml.core.conda_dependencies import CondaDependencies \n", "driver_file = \"score_sparkml.py\"\n",
"\n", "my_conda_file = \"mydeployenv.yml\"\n",
"myacienv = CondaDependencies.create(conda_packages=['scikit-learn','numpy','pandas']) #showing how to add libs as an eg. - not needed for this model.\n", "\n",
"\n", "# image creation\n",
"with open(\"mydeployenv.yml\",\"w\") as f:\n", "from azureml.core.image import ContainerImage\n",
" f.write(myacienv.serialize_to_string())" "myimage_config = ContainerImage.image_configuration(execution_script = driver_file, \n",
] " runtime = runtime, \n",
}, " conda_file = my_conda_file)\n",
{ "\n",
"cell_type": "code", "# Webservice creation\n",
"execution_count": null, "myservice = Webservice.deploy_from_model(\n",
"metadata": {}, " workspace=ws, \n",
"outputs": [], " name=service_name,\n",
"source": [ " deployment_config = myaci_config,\n",
"#deploy to ACI\n", " models = [mymodel],\n",
"from azureml.core.webservice import AciWebservice, Webservice\n", " image_config = myimage_config\n",
"\n", " )\n",
"myaci_config = AciWebservice.deploy_configuration(\n", "\n",
" cpu_cores = 2, \n", "myservice.wait_for_deployment(show_output=True)"
" memory_gb = 2, \n", ]
" tags = {'name':'Databricks Azure ML ACI'}, \n", },
" description = 'This is for ADB and AML example. Azure Databricks & Azure ML SDK demo with ACI by Parashar.')" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "help(Webservice)"
"outputs": [], ]
"source": [ },
"# this will take 10-15 minutes to finish\n", {
"\n", "cell_type": "code",
"service_name = \"aciws\"\n", "execution_count": null,
"runtime = \"spark-py\" \n", "metadata": {},
"driver_file = \"score_sparkml.py\"\n", "outputs": [],
"my_conda_file = \"mydeployenv.yml\"\n", "source": [
"\n", "# List images by ws\n",
"# image creation\n", "\n",
"from azureml.core.image import ContainerImage\n", "for i in ContainerImage.list(workspace = ws):\n",
"myimage_config = ContainerImage.image_configuration(execution_script = driver_file, \n", " print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))"
" runtime = runtime, \n", ]
" conda_file = my_conda_file)\n", },
"\n", {
"# Webservice creation\n", "cell_type": "code",
"myservice = Webservice.deploy_from_model(\n", "execution_count": null,
" workspace=ws, \n", "metadata": {},
" name=service_name,\n", "outputs": [],
" deployment_config = myaci_config,\n", "source": [
" models = [mymodel],\n", "#for using the Web HTTP API \n",
" image_config = myimage_config\n", "print(myservice.scoring_uri)"
" )\n", ]
"\n", },
"myservice.wait_for_deployment(show_output=True)" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "import json\n",
"outputs": [], "\n",
"source": [ "#get the some sample data\n",
"help(Webservice)" "test_data_path = \"AdultCensusIncomeTest\"\n",
] "test = spark.read.parquet(test_data_path).limit(5)\n",
}, "\n",
{ "test_json = json.dumps(test.toJSON().collect())\n",
"cell_type": "code", "\n",
"execution_count": null, "print(test_json)"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"# List images by ws\n", "cell_type": "code",
"\n", "execution_count": null,
"for i in ContainerImage.list(workspace = ws):\n", "metadata": {},
" print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))" "outputs": [],
] "source": [
}, "#using data defined above predict if income is >50K (1) or <=50K (0)\n",
{ "myservice.run(input_data=test_json)"
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "code",
"source": [ "execution_count": null,
"#for using the Web HTTP API \n", "metadata": {},
"print(myservice.scoring_uri)" "outputs": [],
] "source": [
}, "#comment to not delete the web service\n",
{ "myservice.delete()"
"cell_type": "code", ]
"execution_count": null, }
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"#get the some sample data\n",
"test_data_path = \"AdultCensusIncomeTest\"\n",
"test = spark.read.parquet(test_data_path).limit(5)\n",
"\n",
"test_json = json.dumps(test.toJSON().collect())\n",
"\n",
"print(test_json)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#using data defined above predict if income is >50K (1) or <=50K (0)\n",
"myservice.run(input_data=test_json)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#comment to not delete the web service\n",
"#myservice.delete()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "pasha"
},
{
"name": "wamartin"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3", "authors": [
"language": "python", {
"name": "python3" "name": "pasha"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"name": "deploy-to-aci-04",
"notebookId": 3836944406456376
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 1
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"name": "04.DeploytoACI",
"notebookId": 3836944406456376
},
"nbformat": 4,
"nbformat_minor": 1
}


@@ -0,0 +1,236 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Azure ML & Azure Databricks notebooks by Parashar Shah.\n",
"\n",
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook uses image from ACI notebook for deploying to AKS."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set auth to be used by workspace related APIs.\n",
"# For automation or CI/CD ServicePrincipalAuthentication can be used.\n",
"# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py\n",
"auth = None"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config(auth = auth)\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# List images by ws\n",
"\n",
"from azureml.core.image import ContainerImage\n",
"for i in ContainerImage.list(workspace = ws):\n",
" print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.image import Image\n",
"myimage = Image(workspace=ws, name=\"aciws\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#create AKS compute\n",
"#it may take 20-25 minutes to create a new cluster\n",
"\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"\n",
"# Use the default configuration (can also provide parameters to customize)\n",
"prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"aks_name = 'ps-aks-demo2' \n",
"\n",
"# Create the cluster\n",
"aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
" provisioning_configuration = prov_config)\n",
"\n",
"aks_target.wait_for_completion(show_output = True)\n",
"\n",
"print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"help( Webservice.deploy_from_image)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice, AksWebservice\n",
"from azureml.core.image import ContainerImage\n",
"\n",
"#Set the web service configuration (using default here with app insights)\n",
"aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)\n",
"\n",
"#unique service name\n",
"service_name ='ps-aks-service'\n",
"\n",
"# Webservice creation using single command, there is a variant to use image directly as well.\n",
"aks_service = Webservice.deploy_from_image(\n",
" workspace=ws, \n",
" name=service_name,\n",
" deployment_config = aks_config,\n",
" image = myimage,\n",
" deployment_target = aks_target\n",
" )\n",
"\n",
"aks_service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.deployment_status"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#for using the Web HTTP API \n",
"print(aks_service.scoring_uri)\n",
"print(aks_service.get_keys())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"#get the some sample data\n",
"test_data_path = \"AdultCensusIncomeTest\"\n",
"test = spark.read.parquet(test_data_path).limit(5)\n",
"\n",
"test_json = json.dumps(test.toJSON().collect())\n",
"\n",
"print(test_json)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#using data defined above predict if income is >50K (1) or <=50K (0)\n",
"aks_service.run(input_data=test_json)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#comment to not delete the web service\n",
"aks_service.delete()\n",
"#image.delete()\n",
"#model.delete()\n",
"aks_target.delete() "
]
}
],
"metadata": {
"authors": [
{
"name": "pasha"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"name": "deploy-to-aks-existingimage-05",
"notebookId": 1030695628045968
},
"nbformat": 4,
"nbformat_minor": 1
}


@@ -1,182 +1,172 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Azure ML & Azure Databricks notebooks by Parashar Shah.\n",
"\n",
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#Data Ingestion"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import urllib"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Download AdultCensusIncome.csv from Azure CDN. This file has 32,561 rows.\n",
"dataurl = \"https://amldockerdatasets.azureedge.net/AdultCensusIncome.csv\"\n",
"datafile = \"AdultCensusIncome.csv\"\n",
"datafile_dbfs = os.path.join(\"/dbfs\", datafile)\n",
"\n",
"if os.path.isfile(datafile_dbfs):\n",
"    print(\"found {} at {}\".format(datafile, datafile_dbfs))\n",
"else:\n",
"    print(\"downloading {} to {}\".format(datafile, datafile_dbfs))\n",
"    urllib.request.urlretrieve(dataurl, datafile_dbfs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a Spark dataframe out of the csv file.\n",
"data_all = sqlContext.read.format('csv').options(header='true', inferSchema='true', ignoreLeadingWhiteSpace='true', ignoreTrailingWhiteSpace='true').load(datafile)\n",
"print(\"({}, {})\".format(data_all.count(), len(data_all.columns)))\n",
"data_all.printSchema()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#renaming columns\n",
"columns_new = [col.replace(\"-\", \"_\") for col in data_all.columns]\n",
"data_all = data_all.toDF(*columns_new)\n",
"data_all.printSchema()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"display(data_all.limit(5))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#Data Preparation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Choose feature columns and the label column.\n",
"label = \"income\"\n",
"xvars = set(data_all.columns) - {label}\n",
"\n",
"print(\"label = {}\".format(label))\n",
"print(\"features = {}\".format(xvars))\n",
"\n",
"data = data_all.select([*xvars, label])\n",
"\n",
"# Split data into train and test.\n",
"train, test = data.randomSplit([0.75, 0.25], seed=123)\n",
"\n",
"print(\"train ({}, {})\".format(train.count(), len(train.columns)))\n",
"print(\"test ({}, {})\".format(test.count(), len(test.columns)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#Data Persistence"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Write the train and test data sets to intermediate storage\n",
"train_data_path = \"AdultCensusIncomeTrain\"\n",
"test_data_path = \"AdultCensusIncomeTest\"\n",
"\n",
"train_data_path_dbfs = os.path.join(\"/dbfs\", train_data_path)\n",
"test_data_path_dbfs = os.path.join(\"/dbfs\", test_data_path)\n",
"\n",
"train.write.mode('overwrite').parquet(train_data_path)\n",
"test.write.mode('overwrite').parquet(test_data_path)\n",
"print(\"train and test datasets saved to {} and {}\".format(train_data_path_dbfs, test_data_path_dbfs))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "pasha"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"name": "ingest-data-02",
"notebookId": 3836944406456362
},
"nbformat": 4,
"nbformat_minor": 1
}


@@ -1,264 +1,176 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Azure ML & Azure Databricks notebooks by Parashar Shah.\n",
"\n",
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We support installing the AML SDK as a library from the GUI. When attaching a library, follow https://docs.databricks.com/user-guide/libraries.html and add the string below as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.\n",
"\n",
"**install azureml-sdk**\n",
"* Source: Upload Python Egg or PyPi\n",
"* PyPi Name: `azureml-sdk[databricks]`\n",
"* Select Install Library"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"# Check core SDK version number - based on build number of preview/master.\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please specify the Azure subscription Id, resource group name, workspace name, and the region in which you want to create the Azure Machine Learning Workspace.\n",
"\n",
"You can get the value of your Azure subscription ID from the Azure Portal by selecting Subscriptions from the menu on the left.\n",
"\n",
"For the resource_group, use the name of the resource group that contains your Azure Databricks Workspace.\n",
"\n",
"NOTE: If you provide a resource group name that does not exist, the resource group will be automatically created. This may or may not succeed in your environment, depending on the permissions you have on your Azure Subscription."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# subscription_id = \"<your-subscription-id>\"\n",
"# resource_group = \"<your-existing-resource-group>\"\n",
"# workspace_name = \"<a-new-or-existing-workspace; it is unrelated to Databricks workspace>\"\n",
"# workspace_region = \"<your-resource-group-region>\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set auth to be used by workspace related APIs.\n",
"# For automation or CI/CD ServicePrincipalAuthentication can be used.\n",
"# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py\n",
"auth = None"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"# exist_ok checks if the workspace exists or not.\n",
"\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
"                      subscription_id = subscription_id,\n",
"                      resource_group = resource_group, \n",
"                      location = workspace_region,\n",
"                      auth = auth,\n",
"                      exist_ok=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#get workspace details\n",
"ws.get_details()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace(workspace_name = workspace_name,\n",
"               subscription_id = subscription_id,\n",
"               resource_group = resource_group,\n",
"               auth = auth)\n",
"\n",
"# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()\n",
"#if you need to give a different path/filename please use this\n",
"#write_config(path=\"/databricks/driver/aml_config/\",file_name=<alias_conf.cfg>)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"help(Workspace)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config(auth = auth)\n",
"#ws = Workspace.from_config(<full path>)\n",
"print('Workspace name: ' + ws.name, \n",
"      'Azure region: ' + ws.location, \n",
"      'Subscription id: ' + ws.subscription_id, \n",
"      'Resource group: ' + ws.resource_group, sep = '\\n')"
]
}
"#ws = Workspace.from_config(<full path>)\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##PUBLISHONLY\n",
"## import the Workspace class and check the azureml SDK version\n",
"#from azureml.core import Workspace\n",
"#\n",
"#ws = Workspace.from_config()\n",
"##ws = Workspace.from_config(<full path>)\n",
"#print('Workspace name: ' + ws.name, \n",
"# 'Azure region: ' + ws.location, \n",
"# 'Subscription id: ' + ws.subscription_id, \n",
"# 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "pasha"
},
{
"name": "wamartin"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"name": "01.Installation_and_Configuration",
"notebookId": 3836944406456490
},
"nbformat": 4,
"nbformat_minor": 1
}
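The installation-and-configuration notebook above persists workspace coordinates with `ws.write_config()` and reloads them with `Workspace.from_config()`. The file behind that round trip is plain JSON; the sketch below shows its shape using only the standard library. The field names follow the SDK's `aml_config/config.json` convention of that era, and all values are placeholders, not real subscription details.

```python
import json
import os
import tempfile

# Shape of the aml_config/config.json file that ws.write_config() produces
# (placeholder values; the real file holds your own subscription details).
config = {
    "subscription_id": "<sub-id>",
    "resource_group": "myrg",
    "workspace_name": "myworkspace",
}

folder = os.path.join(tempfile.mkdtemp(), "aml_config")
os.makedirs(folder)
path = os.path.join(folder, "config.json")
with open(path, "w") as f:
    json.dump(config, f)

# Workspace.from_config() walks up from the current directory
# looking for a file like this and reads the same three fields.
with open(path) as f:
    loaded = json.load(f)
print(loaded["workspace_name"])
```

Because the lookup starts from the working directory, keeping one `aml_config` folder per project lets each notebook reload its workspace without hard-coding credentials.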


@@ -0,0 +1,789 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We support installing AML SDK as library from GUI. When attaching a library follow this https://docs.databricks.com/user-guide/libraries.html and add the below string as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.\n",
"\n",
"**install azureml-sdk with Automated ML**\n",
"* Source: Upload Python Egg or PyPi\n",
"* PyPi Name: `azureml-sdk[automl_databricks]`\n",
"* Select Install Library"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML : Classification with Local Compute on Azure DataBricks with deployment to ACI\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create Azure Machine Learning Workspace object and initialize your notebook directory to easily reload this object from a configuration file.\n",
"2. Create an `Experiment` in an existing `Workspace`.\n",
"3. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model using AzureDataBricks.\n",
"5. Explore the results.\n",
"6. Register the model.\n",
"7. Deploy the model.\n",
"8. Test the best fitted model.\n",
"\n",
"Prerequisites:\n",
"Before running this notebook, please follow the readme for installing necessary libraries to your cluster."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register Machine Learning Services Resource Provider\n",
"Microsoft.MachineLearningServices only needs to be registed once in the subscription. To register it:\n",
"Start the Azure portal.\n",
"Select your All services and then Subscription.\n",
"Select the subscription that you want to use.\n",
"Click on Resource providers\n",
"Click the Register link next to Microsoft.MachineLearningServices"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<Your SubscriptionId>\" #you should be owner or contributor\n",
"resource_group = \"<Resource group - new or existing>\" #you should be owner or contributor\n",
"workspace_name = \"<workspace to be created>\" #your workspace name\n",
"workspace_region = \"<azureregion>\" #your region"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region, \n",
" exist_ok=True)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()\n",
"write_config(path=\"/databricks/driver/aml_config/\",file_name=<alias_conf.cfg>)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Folder to Host Sample Projects\n",
"Finally, create a folder where all the sample projects will be hosted."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"sample_projects_folder = './sample_projects'\n",
"\n",
"if not os.path.isdir(sample_projects_folder):\n",
" os.mkdir(sample_projects_folder)\n",
" \n",
"print('Sample projects will be created in {}.'.format(sample_projects_folder))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"import time\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-classification'\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Registering Datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Datastore is the way to save connection information to a storage service (e.g. Azure Blob, Azure Data Lake, Azure SQL) information to your workspace so you can access them without exposing credentials in your code. The first thing you will need to do is register a datastore, you can refer to our [python SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) on how to register datastores. __Note: for best security practices, please do not check in code that contains registering datastores with secrets into your source control__\n",
"\n",
"The code below registers a datastore pointing to a publicly readable blob container."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Datastore\n",
"\n",
"datastore_name = 'demo_training'\n",
"Datastore.register_azure_blob_container(\n",
" workspace = ws, \n",
" datastore_name = datastore_name, \n",
" container_name = 'automl-notebook-data', \n",
" account_name = 'dprepdata'\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is an example on how to register a private blob container\n",
"```python\n",
"datastore = Datastore.register_azure_blob_container(\n",
" workspace = ws, \n",
" datastore_name = 'example_datastore', \n",
" container_name = 'example-container', \n",
" account_name = 'storageaccount',\n",
" account_key = 'accountkey'\n",
")\n",
"```\n",
"The example below shows how to register an Azure Data Lake store. Please make sure you have granted the necessary permissions for the service principal to access the data lake.\n",
"```python\n",
"datastore = Datastore.register_azure_data_lake(\n",
" workspace = ws,\n",
" datastore_name = 'example_datastore',\n",
" store_name = 'adlsstore',\n",
" tenant_id = 'tenant-id-of-service-principal',\n",
" client_id = 'client-id-of-service-principal',\n",
" client_secret = 'client-secret-of-service-principal'\n",
")\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Training Data Using DataPrep"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Automated ML takes a Dataflow as input.\n",
"\n",
"If you are familiar with Pandas and have done your data preparation work in Pandas already, you can use the `read_pandas_dataframe` method in dprep to convert the DataFrame to a Dataflow.\n",
"```python\n",
"df = pd.read_csv(...)\n",
"# apply some transforms\n",
"dprep.read_pandas_dataframe(df, temp_folder='/path/accessible/by/both/driver/and/worker')\n",
"```\n",
"\n",
"If you just need to ingest data without doing any preparation, you can directly use AzureML Data Prep (Data Prep) to do so. The code below demonstrates this scenario. Data Prep also has data preparation capabilities, we have many [sample notebooks](https://github.com/Microsoft/AMLDataPrepDocs) demonstrating the capabilities.\n",
"\n",
"You will get the datastore you registered previously and pass it to Data Prep for reading. The data comes from the digits dataset: `sklearn.datasets.load_digits()`. `DataPath` points to a specific location within a datastore. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.dataprep as dprep\n",
"from azureml.data.datapath import DataPath\n",
"\n",
"datastore = Datastore.get(workspace = ws, name = datastore_name)\n",
"\n",
"X_train = dprep.read_csv(DataPath(datastore = datastore, path_on_datastore = 'X.csv')) \n",
"y_train = dprep.read_csv(DataPath(datastore = datastore, path_on_datastore = 'y.csv')).to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Review the Data Preparation Result\n",
"You can peek the result of a Dataflow at any range using skip(i) and head(j). Doing so evaluates only j records for all the steps in the Dataflow, which makes it fast even against large datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train.get_profile()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_train.get_profile()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**spark_context**|Spark Context object. for Databricks, use spark_context=sc|\n",
"|**max_concurrent_iterations**|Maximum number of iterations to execute in parallel. This should be <= number of worker nodes in your Azure Databricks cluster.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
"|**preprocess**|set this to True to enable pre-processing of data eg. string to numeric using one-hot encoding|\n",
"|**exit_score**|Target score for experiment. It is associated with the metric. eg. exit_score=0.995 will exit experiment after that|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 10,\n",
" iterations = 30,\n",
" preprocess = True,\n",
" n_cross_validations = 10,\n",
" max_concurrent_iterations = 2, #change it based on number of worker nodes\n",
" verbosity = logging.INFO,\n",
" spark_context=sc, #databricks/spark related\n",
" X = X_train, \n",
" y = y_train,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = False) # for higher runs please use show_output=False and use the below"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Portal URL for Monitoring Runs\n",
"\n",
"The following will provide a link to the web interface to explore individual run details and status. In the future we might support output displayed in the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"displayHTML(\"<a href={} target='_blank'>Azure Portal: {}</a>\".format(local_run.get_portal_url(), local_run.id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following will show the child runs and waits for the parent run to complete."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve All Child Runs after the experiment is completed (in portal)\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" #print(properties)\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model after the above run is complete \n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric after the above run is complete based on the child run\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the Fitted Model for Deployment\n",
"If neither metric nor iteration are specified in the register_model call, the iteration with the best primary metric is registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model'\n",
"tags = None\n",
"model = local_run.register_model(description = description, tags = tags)\n",
"local_run.model_id # This will be written to the scoring script file later in the notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Scoring Script\n",
"Replace model_id with name of model from output of above register cell"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"import azureml.train.automl\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"\n",
"def init():\n",
" global model\n",
" model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"def run(rawdata):\n",
" try:\n",
" data = json.loads(rawdata)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
" return json.dumps({\"error\": result})\n",
" return json.dumps({\"result\":result.tolist()})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create a YAML File for the Environment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])\n",
"\n",
"conda_env_file_name = 'mydeployenv.yml'\n",
"myenv.save_to_file('.', conda_env_file_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create ACI config"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#deploy to ACI\n",
"from azureml.core.webservice import AciWebservice, Webservice\n",
"\n",
"myaci_config = AciWebservice.deploy_configuration(\n",
" cpu_cores = 2, \n",
" memory_gb = 2, \n",
" tags = {'name':'Databricks Azure ML ACI'}, \n",
" description = 'This is for ADB and AutoML example.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy the Image as a Web Service on Azure Container Instance\n",
"Replace servicename with any meaningful name of service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"# this will take 10-15 minutes to finish\n",
"\n",
"service_name = \"<<servicename>>\"\n",
"runtime = \"spark-py\" \n",
"driver_file = \"score.py\"\n",
"my_conda_file = \"mydeployenv.yml\"\n",
"\n",
"# image creation\n",
"from azureml.core.image import ContainerImage\n",
"myimage_config = ContainerImage.image_configuration(execution_script = driver_file, \n",
" runtime = runtime, \n",
" conda_file = 'mydeployenv.yml')\n",
"\n",
"# Webservice creation\n",
"myservice = Webservice.deploy_from_model(\n",
" workspace=ws, \n",
" name=service_name,\n",
" deployment_config = myaci_config,\n",
" models = [model],\n",
" image_config = myimage_config\n",
" )\n",
"\n",
"myservice.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#for using the Web HTTP API \n",
"print(myservice.scoring_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"#### Load Test Data - you can split the dataset beforehand & pass Train dataset to AutoML and use Test dataset to evaluate the best model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will try to predict digits and see how our model works. This is just an example to show you."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize = (3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" display(fig)"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
},
{
"name": "wamartin"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"name": "auto-ml-classification-local-adb",
"notebookId": 2733885892129020
},
"nbformat": 4,
"nbformat_minor": 1
}
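The deployment section above prints `myservice.scoring_uri` but stops short of calling it. A sketch of invoking such an endpoint over HTTP follows; the URI is a placeholder, the all-zeros digit is made-up input, and the stand-in reply mimics the JSON that the notebook's `score.py` `run()` returns (a `"data"` key in, a `"result"` key out). Uncomment the `urllib` lines to hit a real service.

```python
import json
# from urllib.request import Request, urlopen  # uncomment to call a real service

scoring_uri = "http://<aci-service>.azurecontainer.io/score"  # placeholder URI

# Build the request body score.py's run() expects: {"data": [[...64 pixel values...], ...]}
sample = {"data": [[0.0] * 64]}          # one all-zeros 8x8 digit, flattened
body = json.dumps(sample).encode("utf-8")

# req = Request(scoring_uri, data=body, headers={"Content-Type": "application/json"})
# raw = urlopen(req).read()

# Parse the kind of reply run() produces: a JSON object with a "result" list
raw = json.dumps({"result": [5]})        # stand-in for the service reply
prediction = json.loads(raw)["result"]
print(prediction[0])
```

Note that `run()` returns an `"error"` key instead of `"result"` when scoring throws, so a production client should check for both before indexing.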


@@ -0,0 +1 @@
Test1


@@ -0,0 +1,5 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
print("In train.py")
print("As a data scientist, this is where I use my training code.")
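The train.py above is just a placeholder that prints two lines; in a real experiment the script would accept parameters and report metrics. Below is a minimal stdlib-only sketch of that shape. The `--alpha` parameter and the toy `mse` value are illustrative inventions, not from the repository; an actual Azure ML training script would report metrics through the SDK's `Run.get_context().log(...)` instead of printing them.

```python
# Illustrative training-script skeleton; the argument and metric are made up.
import argparse


def main(argv=None):
    parser = argparse.ArgumentParser(description="toy training script")
    parser.add_argument("--alpha", type=float, default=0.5)  # hypothetical hyperparameter
    args = parser.parse_args(argv)

    print("In train.py")
    # Stand-in for real training; a real script would fit a model here and
    # log metrics with azureml's Run.get_context().log("mse", mse).
    mse = (1.0 - args.alpha) ** 2
    print("mse:", mse)
    return mse


if __name__ == "__main__":
    main()
```

Parameterizing the script this way lets the same file run locally, in an Estimator, or inside a HyperDrive sweep where `--alpha` is varied per child run.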


@@ -0,0 +1,5 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
print("In train.py")
print("As a data scientist, this is where I use my training code.")


@@ -1,495 +1,491 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Enabling App Insights for Services in Production\n",
"With this notebook, you can learn how to enable App Insights for standard service monitoring; we also provide examples of custom logging within a model's scoring file. \n",
"\n",
"\n",
"## What does Application Insights monitor?\n",
"It monitors request rates, response times, failure rates, etc. For more information visit [App Insights docs.](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview)\n",
"\n",
"\n",
"## What is different compared to the standard production deployment process?\n",
"If you want to enable generic App Insights for a service, run:\n",
"```python\n",
"aks_service = Webservice(ws, \"aks-w-dc2\")\n",
"aks_service.update(enable_app_insights=True)\n",
"```\n",
"where \"aks-w-dc2\" is your service name. You can also do this from the Azure Portal under your Workspace --> Deployments --> select deployment --> Edit --> Advanced Settings --> select \"Enable AppInsights diagnostics\".\n",
"\n",
"If you want to log custom traces, follow the standard deployment process for AKS and:\n",
"1. Update the scoring file.\n",
"2. Update the AKS configuration.\n",
"3. Build a new image and deploy it. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Import your dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import AksWebservice\n",
"import azureml.core\n",
"import json\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Set up your configuration and create a workspace\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Register Model\n",
"Register an existing trained model; add a description and tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Register the model\n",
"from azureml.core.model import Model\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
"                       model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n",
"                       tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
"                       description = \"Ridge regression model to predict diabetes\",\n",
"                       workspace = ws)\n",
" description = \"Ridge regression model to predict diabetes\",\n", "\n",
" workspace = ws)\n", "print(model.name, model.description, model.version)"
"\n", ]
"print(model.name, model.description, model.version)" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "## 4. *Update your scoring file with custom print statements*\n",
"source": [ "Here is an example:\n",
"## 4. *Update your scoring file with custom print statements*\n", "### a. In your init function add:\n",
"Here is an example:\n", "```python\n",
"### a. In your init function add:\n", "print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))```\n",
"```python\n", "\n",
"print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))```\n", "### b. In your run function add:\n",
"\n", "```python\n",
"### b. In your run function add:\n", "print (\"Prediction created\" + time.strftime(\"%H:%M:%S\"))```"
"```python\n", ]
"print (\"Prediction created\" + time.strftime(\"%H:%M:%S\"))```" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "%%writefile score.py\n",
"source": [ "import pickle\n",
"%%writefile score.py\n", "import json\n",
"import pickle\n", "import numpy \n",
"import json\n", "from sklearn.externals import joblib\n",
"import numpy \n", "from sklearn.linear_model import Ridge\n",
"from sklearn.externals import joblib\n", "from azureml.core.model import Model\n",
"from sklearn.linear_model import Ridge\n", "import time\n",
"from azureml.core.model import Model\n", "\n",
"import time\n", "def init():\n",
"\n", " global model\n",
"def init():\n", " #Print statement for appinsights custom traces:\n",
" global model\n", " print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n",
" #Print statement for appinsights custom traces:\n", " \n",
" print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n", " # note here \"sklearn_regression_model.pkl\" is the name of the model registered under the workspace\n",
" \n", " # this call should return the path to the model.pkl file on the local disk.\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under the workspace\n", " model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')\n",
" # this call should return the path to the model.pkl file on the local disk.\n", " \n",
" model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')\n", " # deserialize the model file back into a sklearn model\n",
" \n", " model = joblib.load(model_path)\n",
" # deserialize the model file back into a sklearn model\n", " \n",
" model = joblib.load(model_path)\n", "\n",
" \n", "# note you can pass in multiple rows for scoring\n",
"\n", "def run(raw_data):\n",
"# note you can pass in multiple rows for scoring\n", " try:\n",
"def run(raw_data):\n", " data = json.loads(raw_data)['data']\n",
" try:\n", " data = numpy.array(data)\n",
" data = json.loads(raw_data)['data']\n", " result = model.predict(data)\n",
" data = numpy.array(data)\n", " print (\"Prediction created\" + time.strftime(\"%H:%M:%S\"))\n",
" result = model.predict(data)\n", " # you can return any datatype as long as it is JSON-serializable\n",
" print (\"Prediction created\" + time.strftime(\"%H:%M:%S\"))\n", " return result.tolist()\n",
" # you can return any datatype as long as it is JSON-serializable\n", " except Exception as e:\n",
" return result.tolist()\n", " error = str(e)\n",
" except Exception as e:\n", " print (error + time.strftime(\"%H:%M:%S\"))\n",
" error = str(e)\n", " return error"
" print (error + time.strftime(\"%H:%M:%S\"))\n", ]
" return error" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "## 5. *Create myenv.yml file*"
"source": [ ]
"## 5. *Create myenv.yml file*" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "from azureml.core.conda_dependencies import CondaDependencies \n",
"source": [ "\n",
"from azureml.core.conda_dependencies import CondaDependencies \n", "myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"\n", "\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n", "with open(\"myenv.yml\",\"w\") as f:\n",
"\n", " f.write(myenv.serialize_to_string())"
"with open(\"myenv.yml\",\"w\") as f:\n", ]
" f.write(myenv.serialize_to_string())" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "## 6. Create your new Image"
"source": [ ]
"## 6. Create your new Image" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "from azureml.core.image import ContainerImage\n",
"source": [ "\n",
"from azureml.core.image import ContainerImage\n", "image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
"\n", " runtime = \"python\",\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n", " conda_file = \"myenv.yml\",\n",
" runtime = \"python\",\n", " description = \"Image with ridge regression model\",\n",
" conda_file = \"myenv.yml\",\n", " tags = {'area': \"diabetes\", 'type': \"regression\"}\n",
" description = \"Image with ridge regression model\",\n", " )\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}\n", "\n",
" )\n", "image = ContainerImage.create(name = \"myimage1\",\n",
"\n", " # this is the model object\n",
"image = ContainerImage.create(name = \"myimage1\",\n", " models = [model],\n",
" # this is the model object\n", " image_config = image_config,\n",
" models = [model],\n", " workspace = ws)\n",
" image_config = image_config,\n", "\n",
" workspace = ws)\n", "image.wait_for_creation(show_output = True)"
"\n", ]
"image.wait_for_creation(show_output = True)" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "## Deploy to ACI (Optional)"
"source": [ ]
"## Deploy to ACI (Optional)" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "from azureml.core.webservice import AciWebservice\n",
"source": [ "\n",
"from azureml.core.webservice import AciWebservice\n", "aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
"\n", " memory_gb = 1, \n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n", " tags = {'area': \"diabetes\", 'type': \"regression\"}, \n",
" memory_gb = 1, \n", " description = 'Predict diabetes using regression model',\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}, \n", " enable_app_insights = True)"
" description = 'Predict diabetes using regression model',\n", ]
" enable_app_insights = True)" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "from azureml.core.webservice import Webservice\n",
"source": [ "\n",
"from azureml.core.webservice import Webservice\n", "aci_service_name = 'my-aci-service-4'\n",
"\n", "print(aci_service_name)\n",
"aci_service_name = 'my-aci-service-4'\n", "aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
"print(aci_service_name)\n", " image = image,\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n", " name = aci_service_name,\n",
" image = image,\n", " workspace = ws)\n",
" name = aci_service_name,\n", "aci_service.wait_for_deployment(True)\n",
" workspace = ws)\n", "print(aci_service.state)"
"aci_service.wait_for_deployment(True)\n", ]
"print(aci_service.state)" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "%%time\n",
"source": [ "\n",
"%%time\n", "test_sample = json.dumps({'data': [\n",
"import json\n", " [1,28,13,45,54,6,57,8,8,10], \n",
"\n", " [101,9,8,37,6,45,4,3,2,41]\n",
"test_sample = json.dumps({'data': [\n", "]})\n",
" [1,28,13,45,54,6,57,8,8,10], \n", "test_sample = bytes(test_sample,encoding='utf8')"
" [101,9,8,37,6,45,4,3,2,41]\n", ]
"]})\n", },
"test_sample = bytes(test_sample,encoding='utf8')" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "if aci_service.state == \"Healthy\":\n",
"outputs": [], " prediction = aci_service.run(input_data=test_sample)\n",
"source": [ " print(prediction)\n",
"if aci_service.state == \"Healthy\":\n", "else:\n",
" prediction = aci_service.run(input_data=test_sample)\n", " raise ValueError(\"Service deployment isn't healthy, can't call the service\")"
" print(prediction)\n", ]
"else:\n", },
" raise ValueError(\"Service deployment isn't healthy, can't call the service\")" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "## 7. Deploy to AKS service"
"metadata": {}, ]
"source": [ },
"## 7. Deploy to AKS service" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "### Create AKS compute if you haven't done so."
"metadata": {}, ]
"source": [ },
"### Create AKS compute if you haven't done so." {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "# Use the default configuration (can also provide parameters to customize)\n",
"outputs": [], "prov_config = AksCompute.provisioning_configuration()\n",
"source": [ "\n",
"# Use the default configuration (can also provide parameters to customize)\n", "aks_name = 'my-aks-test3' \n",
"prov_config = AksCompute.provisioning_configuration()\n", "# Create the cluster\n",
"\n", "aks_target = ComputeTarget.create(workspace = ws, \n",
"aks_name = 'my-aks-test3' \n", " name = aks_name, \n",
"# Create the cluster\n", " provisioning_configuration = prov_config)"
"aks_target = ComputeTarget.create(workspace = ws, \n", ]
" name = aks_name, \n", },
" provisioning_configuration = prov_config)" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "%%time\n",
"outputs": [], "aks_target.wait_for_completion(show_output = True)"
"source": [ ]
"%%time\n", },
"aks_target.wait_for_completion(show_output = True)" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "print(aks_target.provisioning_state)\n",
"outputs": [], "print(aks_target.provisioning_errors)"
"source": [ ]
"print(aks_target.provisioning_state)\n", },
"print(aks_target.provisioning_errors)" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "If you already have a cluster you can attach the service to it:"
"metadata": {}, ]
"source": [ },
"If you already have a cluster you can attach the service to it:" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "```python \n",
"metadata": {}, "%%time\n",
"source": [ "resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n",
"```python \n", "create_name= 'myaks4'\n",
"%%time\n", "attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
"resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n", "aks_target = ComputeTarget.attach(workspace = ws, \n",
"create_name= 'myaks4'\n", " name = create_name, \n",
"attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n", " attach_configuration=attach_config)\n",
"aks_target = ComputeTarget.attach(workspace = ws, \n", "## Wait for the operation to complete\n",
" name = create_name, \n", "aks_target.wait_for_provisioning(True)```"
" attach_configuration=attach_config)\n", ]
"## Wait for the operation to complete\n", },
"aks_target.wait_for_provisioning(True)```" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "### a. *Activate App Insights through updating AKS Webservice configuration*\n",
"metadata": {}, "In order to enable App Insights in your service you will need to update your AKS configuration file:"
"source": [ ]
"### a. *Activate App Insights through updating AKS Webservice configuration*\n", },
"In order to enable App Insights in your service you will need to update your AKS configuration file:" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "#Set the web service configuration\n",
"outputs": [], "aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)"
"source": [ ]
"#Set the web service configuration\n", },
"aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "### b. Deploy your service"
"metadata": {}, ]
"source": [ },
"### b. Deploy your service" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "if aks_target.provisioning_state== \"Succeeded\": \n",
"outputs": [], " aks_service_name ='aks-w-dc5'\n",
"source": [ " aks_service = Webservice.deploy_from_image(workspace = ws, \n",
"if aks_target.provisioning_state== \"Succeeded\": \n", " name = aks_service_name,\n",
" aks_service_name ='aks-w-dc5'\n", " image = image,\n",
" aks_service = Webservice.deploy_from_image(workspace = ws, \n", " deployment_config = aks_config,\n",
" name = aks_service_name,\n", " deployment_target = aks_target\n",
" image = image,\n", " )\n",
" deployment_config = aks_config,\n", " aks_service.wait_for_deployment(show_output = True)\n",
" deployment_target = aks_target\n", " print(aks_service.state)\n",
" )\n", "else:\n",
" aks_service.wait_for_deployment(show_output = True)\n", " raise ValueError(\"AKS provisioning failed.\")"
" print(aks_service.state)\n", ]
"else:\n", },
" raise ValueError(\"AKS provisioning failed.\")" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "## 8. Test your service "
"metadata": {}, ]
"source": [ },
"## 8. Test your service " {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "%%time\n",
"outputs": [], "\n",
"source": [ "test_sample = json.dumps({'data': [\n",
"%%time\n", " [1,28,13,45,54,6,57,8,8,10], \n",
"import json\n", " [101,9,8,37,6,45,4,3,2,41]\n",
"\n", "]})\n",
"test_sample = json.dumps({'data': [\n", "test_sample = bytes(test_sample,encoding='utf8')\n",
" [1,28,13,45,54,6,57,8,8,10], \n", "\n",
" [101,9,8,37,6,45,4,3,2,41]\n", "if aks_service.state == \"Healthy\":\n",
"]})\n", " prediction = aks_service.run(input_data=test_sample)\n",
"test_sample = bytes(test_sample,encoding='utf8')\n", " print(prediction)\n",
"\n", "else:\n",
"if aks_service.state == \"Healthy\":\n", " raise ValueError(\"Service deployment isn't healthy, can't call the service\")"
" prediction = aks_service.run(input_data=test_sample)\n", ]
" print(prediction)\n", },
"else:\n", {
" raise ValueError(\"Service deployment isn't healthy, can't call the service\")" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## 9. See your service telemetry in App Insights\n",
"cell_type": "markdown", "1. Go to the [Azure Portal](https://portal.azure.com/)\n",
"metadata": {}, "2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type\n",
"source": [ "3. Click on the AppInsights resource. You'll see a highlevel dashboard with information on Requests, Server response time and availability.\n",
"## 9. See your service telemetry in App Insights\n", "4. Click on the top banner \"Analytics\"\n",
"1. Go to the [Azure Portal](https://portal.azure.com/)\n", "5. In the \"Schema\" section select \"traces\" and run your query.\n",
"2. All resources--> Select the subscription/resource group where you created your Workspace--> Select the App Insights type\n", "6. Voila! All your custom traces should be there."
"3. Click on the AppInsights resource. You'll see a highlevel dashboard with information on Requests, Server response time and availability.\n", ]
"4. Click on the top banner \"Analytics\"\n", },
"5. In the \"Schema\" section select \"traces\" and run your query.\n", {
"6. Voila! All your custom traces should be there." "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "# Disable App Insights"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"# Disable App Insights" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "aks_service.update(enable_app_insights=False)"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"aks_service.update(enable_app_insights=False)" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## Clean up"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"## Clean up" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "%%time\n",
"metadata": {}, "aks_service.delete()\n",
"outputs": [], "aci_service.delete()\n",
"source": [ "image.delete()\n",
"%%time\n", "model.delete()"
"aks_service.delete()\n", ]
"aci_service.delete()\n", }
"image.delete()\n",
"model.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "marthalc"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python [default]", "authors": [
"language": "python", {
"name": "python3" "name": "jocier"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

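The notebooks above build the scoring request payload inline before calling the service. As a quick local sanity check (a sketch only: it assumes `numpy` is installed, and it omits the actual `service.run(...)` call since that requires a live deployment), the payload round trip can be exercised without any Azure resources:

```python
import json

import numpy

# Serialize two rows the way the notebook's test cell does, then parse
# them back the way score.py's run() does. No Azure resources needed.
test_sample = json.dumps({'data': [
    [1, 28, 13, 45, 54, 6, 57, 8, 8, 10],
    [101, 9, 8, 37, 6, 45, 4, 3, 2, 41],
]})
payload = bytes(test_sample, encoding='utf8')

# json.loads accepts bytes on Python 3.6+, matching what the service receives.
data = numpy.array(json.loads(payload)['data'])
print(data.shape)  # -> (2, 10)
```

This confirms the payload shape matches what `model.predict` expects (two rows of ten features) before any deployment is involved.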

@@ -1,477 +1,471 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Enabling Data Collection for Models in Production\n", "# Enabling Data Collection for Models in Production\n",
"With this notebook, you can learn how to collect input model data from your Azure Machine Learning service in an Azure Blob storage. Once enabled, this data collected gives you the opportunity:\n", "With this notebook, you can learn how to collect input model data from your Azure Machine Learning service in an Azure Blob storage. Once enabled, this data collected gives you the opportunity:\n",
"\n", "\n",
"* Monitor data drifts as production data enters your model\n", "* Monitor data drifts as production data enters your model\n",
"* Make better decisions on when to retrain or optimize your model\n", "* Make better decisions on when to retrain or optimize your model\n",
"* Retrain your model with the data collected\n", "* Retrain your model with the data collected\n",
"\n", "\n",
"## What data is collected?\n", "## What data is collected?\n",
"* Model input data (voice, images, and video are not supported) from services deployed in Azure Kubernetes Cluster (AKS)\n", "* Model input data (voice, images, and video are not supported) from services deployed in Azure Kubernetes Cluster (AKS)\n",
"* Model predictions using production input data.\n", "* Model predictions using production input data.\n",
"\n", "\n",
"**Note:** pre-aggregation or pre-calculations on this data are done by user and not included in this version of the product.\n", "**Note:** pre-aggregation or pre-calculations on this data are done by user and not included in this version of the product.\n",
"\n", "\n",
"## What is different compared to standard production deployment process?\n", "## What is different compared to standard production deployment process?\n",
"1. Update scoring file.\n", "1. Update scoring file.\n",
"2. Update yml file with new dependency.\n", "2. Update yml file with new dependency.\n",
"3. Update aks configuration.\n", "3. Update aks configuration.\n",
"4. Build new image and deploy it. " "4. Build new image and deploy it. "
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## 1. Import your dependencies" "## 1. Import your dependencies"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core import Workspace, Run\n", "from azureml.core import Workspace\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n", "from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import Webservice, AksWebservice\n", "from azureml.core.webservice import Webservice, AksWebservice\n",
"from azureml.core.image import Image\n", "import azureml.core\n",
"from azureml.core.model import Model\n", "print(azureml.core.VERSION)"
"\n", ]
"import azureml.core\n", },
"print(azureml.core.VERSION)" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "## 2. Set up your configuration and create a workspace"
"metadata": {}, ]
"source": [ },
"## 2. Set up your configuration and create a workspace\n", {
"Follow Notebook 00 instructions to do this.\n" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "ws = Workspace.from_config()\n",
"metadata": {}, "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
"outputs": [], ]
"source": [ },
"ws = Workspace.from_config()\n", {
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## 3. Register Model\n",
"cell_type": "markdown", "Register an existing trained model, add descirption and tags."
"metadata": {}, ]
"source": [ },
"## 3. Register Model\n", {
"Register an existing trained model, add descirption and tags." "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "#Register the model\n",
"metadata": {}, "from azureml.core.model import Model\n",
"outputs": [], "model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
"source": [ " model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n",
"#Register the model\n", " tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
"from azureml.core.model import Model\n", " description = \"Ridge regression model to predict diabetes\",\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n", " workspace = ws)\n",
" model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n", "\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n", "print(model.name, model.description, model.version)"
" description = \"Ridge regression model to predict diabetes\",\n", ]
" workspace = ws)\n", },
"\n", {
"print(model.name, model.description, model.version)" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## 4. *Update your scoring file with Data Collection*\n",
"cell_type": "markdown", "The file below, compared to the file used in notebook 11, has the following changes:\n",
"metadata": {}, "### a. Import the module\n",
"source": [ "```python \n",
"## 4. *Update your scoring file with Data Collection*\n", "from azureml.monitoring import ModelDataCollector```\n",
"The file below, compared to the file used in notebook 11, has the following changes:\n", "### b. In your init function add:\n",
"### a. Import the module\n", "```python \n",
"```python \n", "global inputs_dc, prediction_d\n",
"from azureml.monitoring import ModelDataCollector```\n", "inputs_dc = ModelDataCollector(\"best_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\", \"feat3\", \"feat4\", \"feat5\", \"Feat6\"])\n",
"### b. In your init function add:\n", "prediction_dc = ModelDataCollector(\"best_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"])```\n",
"```python \n", " \n",
"global inputs_dc, prediction_d\n", "* Identifier: Identifier is later used for building the folder structure in your Blob, it can be used to divide \"raw\" data versus \"processed\".\n",
"inputs_dc = ModelDataCollector(\"best_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\", \"feat3\", \"feat4\", \"feat5\", \"Feat6\"])\n", "* CorrelationId: is an optional parameter, you do not need to set it up if your model doesn't require it. Having a correlationId in place does help you for easier mapping with other data. (Examples include: LoanNumber, CustomerId, etc.)\n",
"prediction_dc = ModelDataCollector(\"best_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"])```\n", "* Feature Names: These need to be set up in the order of your features in order for them to have column names when the .csv is created.\n",
" \n", "\n",
"* Identifier: Identifier is later used for building the folder structure in your Blob, it can be used to divide \"raw\" data versus \"processed\".\n", "### c. In your run function add:\n",
"* CorrelationId: is an optional parameter, you do not need to set it up if your model doesn't require it. Having a correlationId in place does help you for easier mapping with other data. (Examples include: LoanNumber, CustomerId, etc.)\n", "```python\n",
"* Feature Names: These need to be set up in the order of your features in order for them to have column names when the .csv is created.\n", "inputs_dc.collect(data)\n",
"\n", "prediction_dc.collect(result)```"
"### c. In your run function add:\n", ]
"```python\n", },
"inputs_dc.collect(data)\n", {
"prediction_dc.collect(result)```" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "%%writefile score.py\n",
"metadata": {}, "import pickle\n",
"outputs": [], "import json\n",
"source": [ "import numpy \n",
"%%writefile score.py\n", "from sklearn.externals import joblib\n",
"import pickle\n", "from sklearn.linear_model import Ridge\n",
"import json\n", "from azureml.core.model import Model\n",
"import numpy \n", "from azureml.monitoring import ModelDataCollector\n",
"from sklearn.externals import joblib\n", "import time\n",
"from sklearn.linear_model import Ridge\n", "\n",
"from azureml.core.model import Model\n", "def init():\n",
"from azureml.monitoring import ModelDataCollector\n", " global model\n",
"import time\n", " print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n",
"\n", " # note here \"sklearn_regression_model.pkl\" is the name of the model registered under the workspace\n",
"def init():\n", " # this call should return the path to the model.pkl file on the local disk.\n",
" global model\n", " model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')\n",
" print (\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n", " # deserialize the model file back into a sklearn model\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under the workspace\n", " model = joblib.load(model_path)\n",
" # this call should return the path to the model.pkl file on the local disk.\n", " global inputs_dc, prediction_dc\n",
" model_path = Model.get_model_path(model_name = 'sklearn_regression_model.pkl')\n", " # this setup will help us save our inputs under the \"inputs\" path in our Azure Blob\n",
" # deserialize the model file back into a sklearn model\n", " inputs_dc = ModelDataCollector(model_name=\"sklearn_regression_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\"]) \n",
" model = joblib.load(model_path)\n", " # this setup will help us save our ipredictions under the \"predictions\" path in our Azure Blob\n",
" global inputs_dc, prediction_dc\n", " prediction_dc = ModelDataCollector(\"sklearn_regression_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"]) \n",
" # this setup will help us save our inputs under the \"inputs\" path in our Azure Blob\n", " \n",
" inputs_dc = ModelDataCollector(model_name=\"sklearn_regression_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\"]) \n", "# note you can pass in multiple rows for scoring\n",
" # this setup will help us save our ipredictions under the \"predictions\" path in our Azure Blob\n", "def run(raw_data):\n",
" prediction_dc = ModelDataCollector(\"sklearn_regression_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"]) \n", " global inputs_dc, prediction_dc\n",
" \n", " try:\n",
"# note you can pass in multiple rows for scoring\n", " data = json.loads(raw_data)['data']\n",
"def run(raw_data):\n", " data = numpy.array(data)\n",
" global inputs_dc, prediction_dc\n", " result = model.predict(data)\n",
" try:\n", " print (\"saving input data\" + time.strftime(\"%H:%M:%S\"))\n",
" data = json.loads(raw_data)['data']\n", " inputs_dc.collect(data) #this call is saving our input data into our blob\n",
" data = numpy.array(data)\n", " prediction_dc.collect(result)#this call is saving our prediction data into our blob\n",
" result = model.predict(data)\n", " print (\"saving prediction data\" + time.strftime(\"%H:%M:%S\"))\n",
" print (\"saving input data\" + time.strftime(\"%H:%M:%S\"))\n", " # you can return any data type as long as it is JSON-serializable\n",
" inputs_dc.collect(data) #this call is saving our input data into our blob\n", " return result.tolist()\n",
" prediction_dc.collect(result)#this call is saving our prediction data into our blob\n", " except Exception as e:\n",
" print (\"saving prediction data\" + time.strftime(\"%H:%M:%S\"))\n", " error = str(e)\n",
" # you can return any data type as long as it is JSON-serializable\n", " print (error + time.strftime(\"%H:%M:%S\"))\n",
" return result.tolist()\n", " return error"
" except Exception as e:\n", ]
" error = str(e)\n", },
" print (error + time.strftime(\"%H:%M:%S\"))\n", {
" return error" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## 5. *Update your myenv.yml file with the required module*"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"## 5. *Update your myenv.yml file with the required module*" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "from azureml.core.conda_dependencies import CondaDependencies \n",
"metadata": {}, "\n",
"outputs": [], "myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"source": [ "myenv.add_pip_package(\"azureml-monitoring\")\n",
"from azureml.core.conda_dependencies import CondaDependencies \n", "\n",
"\n", "with open(\"myenv.yml\",\"w\") as f:\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n", " f.write(myenv.serialize_to_string())"
"myenv.add_pip_package(\"azureml-monitoring\")\n", ]
"\n", },
"with open(\"myenv.yml\",\"w\") as f:\n", {
" f.write(myenv.serialize_to_string())" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## 6. Create your new Image"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"## 6. Create your new Image" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "from azureml.core.image import ContainerImage\n",
"metadata": {}, "\n",
"outputs": [], "image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
"source": [ " runtime = \"python\",\n",
"from azureml.core.image import ContainerImage\n", " conda_file = \"myenv.yml\",\n",
"\n", " description = \"Image with ridge regression model\",\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n", " tags = {'area': \"diabetes\", 'type': \"regression\"}\n",
" runtime = \"python\",\n", " )\n",
" conda_file = \"myenv.yml\",\n", "\n",
" description = \"Image with ridge regression model\",\n", "image = ContainerImage.create(name = \"myimage1\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}\n", " # this is the model object\n",
" )\n", " models = [model],\n",
"\n", " image_config = image_config,\n",
"image = ContainerImage.create(name = \"myimage1\",\n", " workspace = ws)\n",
" # this is the model object\n", "\n",
" models = [model],\n", "image.wait_for_creation(show_output = True)"
" image_config = image_config,\n", ]
" workspace = ws)\n", },
"\n", {
"image.wait_for_creation(show_output = True)" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "print(model.name, model.description, model.version)"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"print(model.name, model.description, model.version)" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## 7. Deploy to AKS service"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"## 7. Deploy to AKS service" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "### Create AKS compute if you haven't done so."
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"### Create AKS compute if you haven't done so." "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "# Use the default configuration (can also provide parameters to customize)\n",
"metadata": {}, "prov_config = AksCompute.provisioning_configuration()\n",
"outputs": [], "\n",
"source": [ "aks_name = 'my-aks-test1' \n",
"# Use the default configuration (can also provide parameters to customize)\n", "# Create the cluster\n",
"prov_config = AksCompute.provisioning_configuration()\n", "aks_target = ComputeTarget.create(workspace = ws, \n",
"\n", " name = aks_name, \n",
"aks_name = 'my-aks-test1' \n", " provisioning_configuration = prov_config)"
"# Create the cluster\n", ]
"aks_target = ComputeTarget.create(workspace = ws, \n", },
" name = aks_name, \n", {
" provisioning_configuration = prov_config)" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "%%time\n",
"metadata": {}, "aks_target.wait_for_completion(show_output = True)\n",
"outputs": [], "print(aks_target.provisioning_state)\n",
"source": [ "print(aks_target.provisioning_errors)"
"%%time\n", ]
"aks_target.wait_for_completion(show_output = True)\n", },
"print(aks_target.provisioning_state)\n", {
"print(aks_target.provisioning_errors)" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "If you already have a cluster you can attach the service to it:"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"If you already have a cluster you can attach the service to it:" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "```python \n",
"cell_type": "markdown", " %%time\n",
"metadata": { " resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n",
"scrolled": true " create_name= 'myaks4'\n",
}, " attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
"source": [ " aks_target = ComputeTarget.attach(workspace = ws, \n",
"```python \n", " name = create_name, \n",
" %%time\n", " attach_configuration=attach_config)\n",
" resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n", " ## Wait for the operation to complete\n",
" create_name= 'myaks4'\n", " aks_target.wait_for_provisioning(True)```"
" attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n", ]
" aks_target = ComputeTarget.attach(workspace = ws, \n", },
" name = create_name, \n", {
" attach_configuration=attach_config)\n", "cell_type": "markdown",
" ## Wait for the operation to complete\n", "metadata": {},
" aks_target.wait_for_provisioning(True)```" "source": [
] "### a. *Activate Data Collection and App Insights through updating AKS Webservice configuration*\n",
}, "In order to enable Data Collection and App Insights in your service you will need to update your AKS configuration file:"
{ ]
"cell_type": "markdown", },
"metadata": {}, {
"source": [ "cell_type": "code",
"### a. *Activate Data Collection and App Insights through updating AKS Webservice configuration*\n", "execution_count": null,
"In order to enable Data Collection and App Insights in your service you will need to update your AKS configuration file:" "metadata": {},
] "outputs": [],
}, "source": [
{ "#Set the web service configuration\n",
"cell_type": "code", "aks_config = AksWebservice.deploy_configuration(collect_model_data=True, enable_app_insights=True)"
"execution_count": null, ]
"metadata": {}, },
"outputs": [], {
"source": [ "cell_type": "markdown",
"#Set the web service configuration\n", "metadata": {},
"aks_config = AksWebservice.deploy_configuration(collect_model_data=True, enable_app_insights=True)" "source": [
] "### b. Deploy your service"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "code",
"source": [ "execution_count": null,
"### b. Deploy your service" "metadata": {},
] "outputs": [],
}, "source": [
{ "if aks_target.provisioning_state== \"Succeeded\": \n",
"cell_type": "code", " aks_service_name ='aks-w-dc0'\n",
"execution_count": null, " aks_service = Webservice.deploy_from_image(workspace = ws, \n",
"metadata": {}, " name = aks_service_name,\n",
"outputs": [], " image = image,\n",
"source": [ " deployment_config = aks_config,\n",
"if aks_target.provisioning_state== \"Succeeded\": \n", " deployment_target = aks_target\n",
" aks_service_name ='aks-w-dc0'\n", " )\n",
" aks_service = Webservice.deploy_from_image(workspace = ws, \n", " aks_service.wait_for_deployment(show_output = True)\n",
" name = aks_service_name,\n", " print(aks_service.state)\n",
" image = image,\n", "else: \n",
" deployment_config = aks_config,\n", " raise ValueError(\"aks provisioning failed, can't deploy service\")"
" deployment_target = aks_target\n", ]
" )\n", },
" aks_service.wait_for_deployment(show_output = True)\n", {
" print(aks_service.state)\n", "cell_type": "markdown",
"else: \n", "metadata": {},
" raise ValueError(\"aks provisioning failed, can't deploy service\")" "source": [
] "## 8. Test your service and send some data\n",
}, "**Note**: It will take around 15 mins for your data to appear in your blob.\n",
{ "The data will appear in your Azure Blob following this format:\n",
"cell_type": "markdown", "\n",
"metadata": {}, "/modeldata/subscriptionid/resourcegroupname/workspacename/webservicename/modelname/modelversion/identifier/year/month/day/data.csv "
"source": [ ]
"## 8. Test your service and send some data\n", },
"**Note**: It will take around 15 mins for your data to appear in your blob.\n", {
"The data will appear in your Azure Blob following this format:\n", "cell_type": "code",
"\n", "execution_count": null,
"/modeldata/subscriptionid/resourcegroupname/workspacename/webservicename/modelname/modelversion/identifier/year/month/day/data.csv " "metadata": {},
] "outputs": [],
}, "source": [
{ "%%time\n",
"cell_type": "code", "import json\n",
"execution_count": null, "\n",
"metadata": {}, "test_sample = json.dumps({'data': [\n",
"outputs": [], " [1,2,3,4,54,6,7,8,88,10], \n",
"source": [ " [10,9,8,37,36,45,4,33,2,1]\n",
"%%time\n", "]})\n",
"import json\n", "test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n", "\n",
"test_sample = json.dumps({'data': [\n", "if aks_service.state == \"Healthy\":\n",
" [1,2,3,4,54,6,7,8,88,10], \n", " prediction = aks_service.run(input_data=test_sample)\n",
" [10,9,8,37,36,45,4,33,2,1]\n", " print(prediction)\n",
"]})\n", "else:\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n", " raise ValueError(\"Service deployment isn't healthy, can't call the service\")"
"\n", ]
"if aks_service.state == \"Healthy\":\n", },
" prediction = aks_service.run(input_data=test_sample)\n", {
" print(prediction)\n", "cell_type": "markdown",
"else:\n", "metadata": {},
" raise ValueError(\"Service deployment isn't healthy, can't call the service\")" "source": [
] "## 9. Validate you data and analyze it\n",
}, "You can look into your data following this path format in your Azure Blob (it takes up to 15 minutes for the data to appear):\n",
{ "\n",
"cell_type": "markdown", "/modeldata/**subscriptionid>**/**resourcegroupname>**/**workspacename>**/**webservicename>**/**modelname>**/**modelversion>>**/**identifier>**/*year/month/day*/data.csv \n",
"metadata": {}, "\n",
"source": [ "For doing further analysis you have multiple options:"
"## 9. Validate you data and analyze it\n", ]
"You can look into your data following this path format in your Azure Blob (it takes up to 15 minutes for the data to appear):\n", },
"\n", {
"/modeldata/**subscriptionid>**/**resourcegroupname>**/**workspacename>**/**webservicename>**/**modelname>**/**modelversion>>**/**identifier>**/*year/month/day*/data.csv \n", "cell_type": "markdown",
"\n", "metadata": {},
"For doing further analysis you have multiple options:" "source": [
] "### a. Create DataBricks cluter and connect it to your blob\n",
}, "https://docs.microsoft.com/en-us/azure/azure-databricks/quickstart-create-databricks-workspace-portal or in your databricks workspace you can look for the template \"Azure Blob Storage Import Example Notebook\".\n",
{ "\n",
"cell_type": "markdown", "\n",
"metadata": {}, "Here is an example for setting up the file location to extract the relevant data:\n",
"source": [ "\n",
"### a. Create DataBricks cluter and connect it to your blob\n", "<code> file_location = \"wasbs://mycontainer@storageaccountname.blob.core.windows.net/unknown/unknown/unknown-bigdataset-unknown/my_iterate_parking_inputs/2018/&deg;/&deg;/data.csv\" \n",
"https://docs.microsoft.com/en-us/azure/azure-databricks/quickstart-create-databricks-workspace-portal or in your databricks workspace you can look for the template \"Azure Blob Storage Import Example Notebook\".\n", "file_type = \"csv\"</code>\n"
"\n", ]
"\n", },
"Here is an example for setting up the file location to extract the relevant data:\n", {
"\n", "cell_type": "markdown",
"<code> file_location = \"wasbs://mycontainer@storageaccountname.blob.core.windows.net/unknown/unknown/unknown-bigdataset-unknown/my_iterate_parking_inputs/2018/&deg;/&deg;/data.csv\" \n", "metadata": {},
"file_type = \"csv\"</code>\n" "source": [
] "### b. Connect Blob to Power Bi (Small Data only)\n",
}, "1. Download and Open PowerBi Desktop\n",
{ "2. Select \"Get Data\" and click on \"Azure Blob Storage\" >> Connect\n",
"cell_type": "markdown", "3. Add your storage account and enter your storage key.\n",
"metadata": {}, "4. Select the container where your Data Collection is stored and click on Edit. \n",
"source": [ "5. In the query editor, click under \"Name\" column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3\n",
"### b. Connect Blob to Power Bi (Small Data only)\n", "6. Click on the double arrow aside the \"Content\" column to combine the files. \n",
"1. Download and Open PowerBi Desktop\n", "7. Click OK and the data will preload.\n",
"2. Select “Get Data” and click on “Azure Blob Storage” >> Connect\n", "8. You can now click Close and Apply and start building your custom reports on your Model Input data."
"3. Add your storage account and enter your storage key.\n", ]
"4. Select the container where your Data Collection is stored and click on Edit. \n", },
"5. In the query editor, click under “Name” column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3\n", {
"6. Click on the double arrow aside the “Content” column to combine the files. \n", "cell_type": "markdown",
"7. Click OK and the data will preload.\n", "metadata": {},
"8. You can now click Close and Apply and start building your custom reports on your Model Input data." "source": [
] "# Disable Data Collection"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "code",
"source": [ "execution_count": null,
"# Disable Data Collection" "metadata": {},
] "outputs": [],
}, "source": [
{ "aks_service.update(collect_model_data=False)"
"cell_type": "code", ]
"execution_count": null, },
"metadata": {}, {
"outputs": [], "cell_type": "markdown",
"source": [ "metadata": {},
"aks_service.update(collect_model_data=False)" "source": [
] "## Clean up"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "code",
"source": [ "execution_count": null,
"## Clean up" "metadata": {},
] "outputs": [],
}, "source": [
{ "%%time\n",
"cell_type": "code", "aks_service.delete()\n",
"execution_count": null, "image.delete()\n",
"metadata": {}, "model.delete()"
"outputs": [], ]
"source": [ }
"%%time\n",
"aks_service.delete()\n",
"image.delete()\n",
"model.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "marthalc"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python [default]", "authors": [
"language": "python", {
"name": "python3" "name": "jocier"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -4,7 +4,7 @@ These tutorials show how to create and deploy Open Neural Network eXchange ([ONN
 ## Tutorials
-0. [Configure your Azure Machine Learning Workspace](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb)
+0. [Configure your Azure Machine Learning Workspace](../../../configuration.ipynb)
 #### Obtain models from the [ONNX Model Zoo](https://github.com/onnx/models) and deploy with ONNX Runtime Inference
 1. [Handwritten Digit Classification (MNIST)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb)

View File

@@ -1,435 +1,435 @@
 {
 "cells": [
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "Copyright (c) Microsoft Corporation. All rights reserved. \n",
 "\n",
 "Licensed under the MIT License."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "# YOLO Real-time Object Detection using ONNX on AzureML\n",
 "\n",
 "This example shows how to convert the TinyYOLO model from CoreML to ONNX and operationalize it as a web service using Azure Machine Learning services and the ONNX Runtime.\n",
 "\n",
 "## What is ONNX\n",
 "ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by enabling data scientists and developers to use the tools of their choice without worrying about lock-in and flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).\n",
 "\n",
 "## YOLO Details\n",
 "You Only Look Once (YOLO) is a state-of-the-art, real-time object detection system. For more information about YOLO, please visit the [YOLO website](https://pjreddie.com/darknet/yolo/)."
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Prerequisites\n",
 "\n",
 "To make the best use of your time, make sure you have done the following:\n",
 "\n",
 "* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
-"* Go through the [00.configuration.ipynb](../00.configuration.ipynb) notebook to:\n",
+"* Go through the [configuration](../../../configuration.ipynb) notebook to:\n",
 "    * install the AML SDK\n",
 "    * create a workspace and its configuration file (config.json)"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "# Check core SDK version number\n",
 "import azureml.core\n",
 "\n",
 "print(\"SDK version:\", azureml.core.VERSION)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "#### Install necessary packages\n",
 "\n",
 "You'll need to run the following commands to use this tutorial:\n",
 "\n",
 "```sh\n",
 "pip install onnxmltools\n",
 "pip install coremltools # use this on Linux and Mac\n",
 "pip install git+https://github.com/apple/coremltools # use this on Windows\n",
 "```"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Convert model to ONNX\n",
 "\n",
-"First we download the CoreML model. We use the CoreML model listed at https://coreml.store/tinyyolo. This may take a few minutes."
+"First we download the CoreML model. We use the CoreML model from [Matthijs Hollemans's tutorial](https://github.com/hollance/YOLO-CoreML-MPSNNGraph). This may take a few minutes."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "import urllib.request\n",
 "\n",
-"onnx_model_url = \"https://s3-us-west-2.amazonaws.com/coreml-models/TinyYOLO.mlmodel\"\n",
-"urllib.request.urlretrieve(onnx_model_url, filename=\"TinyYOLO.mlmodel\")\n"
+"coreml_model_url = \"https://github.com/hollance/YOLO-CoreML-MPSNNGraph/raw/master/TinyYOLO-CoreML/TinyYOLO-CoreML/TinyYOLO.mlmodel\"\n",
+"urllib.request.urlretrieve(coreml_model_url, filename=\"TinyYOLO.mlmodel\")\n"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "Then we use ONNXMLTools to convert the model."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "import onnxmltools\n",
 "import coremltools\n",
 "\n",
 "# Load a CoreML model\n",
 "coreml_model = coremltools.utils.load_spec('TinyYOLO.mlmodel')\n",
 "\n",
 "# Convert from CoreML into ONNX\n",
 "onnx_model = onnxmltools.convert_coreml(coreml_model, 'TinyYOLOv2')\n",
 "\n",
 "# Save ONNX model\n",
 "onnxmltools.utils.save_model(onnx_model, 'tinyyolov2.onnx')\n",
 "\n",
 "import os\n",
 "print(os.path.getsize('tinyyolov2.onnx'))"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "## Deploying as a web service with Azure ML\n",
 "\n",
 "### Load Azure ML workspace\n",
 "\n",
 "We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "from azureml.core import Workspace\n",
 "\n",
 "ws = Workspace.from_config()\n",
 "print(ws.name, ws.location, ws.resource_group, sep = '\\n')"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Registering your model with Azure ML\n",
 "\n",
 "Now we upload the model and register it in the workspace."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "from azureml.core.model import Model\n",
 "\n",
 "model = Model.register(model_path = \"tinyyolov2.onnx\",\n",
 "                       model_name = \"tinyyolov2\",\n",
 "                       tags = {\"onnx\": \"demo\"},\n",
 "                       description = \"TinyYOLO\",\n",
 "                       workspace = ws)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "#### Displaying your registered models\n",
 "\n",
 "You can optionally list out all the models that you have registered in this workspace."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "models = ws.models\n",
 "for name, m in models.items():\n",
 "    print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Write scoring file\n",
 "\n",
 "We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started so we load the model using the ONNX Runtime into a global session object."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "%%writefile score.py\n",
 "import json\n",
 "import time\n",
 "import sys\n",
 "import os\n",
 "from azureml.core.model import Model\n",
 "import numpy as np    # we're going to use numpy to process input and output data\n",
 "import onnxruntime    # to inference ONNX models, we use the ONNX Runtime\n",
 "\n",
 "def init():\n",
 "    global session\n",
 "    model = Model.get_model_path(model_name = 'tinyyolov2')\n",
 "    session = onnxruntime.InferenceSession(model)\n",
 "\n",
 "def preprocess(input_data_json):\n",
 "    # convert the JSON data into the tensor input\n",
 "    return np.array(json.loads(input_data_json)['data']).astype('float32')\n",
 "\n",
 "def postprocess(result):\n",
 "    return np.array(result).tolist()\n",
 "\n",
 "def run(input_data_json):\n",
 "    try:\n",
 "        start = time.time()   # start timer\n",
 "        input_data = preprocess(input_data_json)\n",
 "        input_name = session.get_inputs()[0].name  # get the id of the first input of the model \n",
 "        result = session.run([], {input_name: input_data})\n",
 "        end = time.time()     # stop timer\n",
 "        return {\"result\": postprocess(result),\n",
 "                \"time\": end - start}\n",
 "    except Exception as e:\n",
 "        result = str(e)\n",
 "        return {\"error\": result}"
 ]
 },
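The `preprocess`/`postprocess` pair in the scoring file above can be sanity-checked locally before the container image is built; a minimal sketch (no ONNX Runtime session involved, and the payload shape here is illustrative rather than TinyYOLO's real input shape):

```python
import json
import numpy as np

def preprocess(input_data_json):
    # same as the scoring script: JSON -> float32 tensor
    return np.array(json.loads(input_data_json)['data']).astype('float32')

def postprocess(result):
    # same as the scoring script: tensor -> JSON-serializable nested lists
    return np.array(result).tolist()

payload = json.dumps({'data': [[[0, 1], [2, 3]]]})
tensor = preprocess(payload)
print(tensor.shape, tensor.dtype)  # (1, 2, 2) float32
print(postprocess(tensor))
```

A round trip like this confirms the JSON wire format matches what `run()` expects before any web-service call is made.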
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "### Create container image\n",
 "First we create a YAML file that specifies which dependencies we would like to see in our container."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "from azureml.core.conda_dependencies import CondaDependencies \n",
 "\n",
 "myenv = CondaDependencies.create(pip_packages=[\"numpy\",\"onnxruntime\",\"azureml-core\"])\n",
 "\n",
 "with open(\"myenv.yml\",\"w\") as f:\n",
 "    f.write(myenv.serialize_to_string())"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "Then we have Azure ML create the container. This step will likely take a few minutes."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
 "from azureml.core.image import ContainerImage\n",
 "\n",
 "image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
 "                                                  runtime = \"python\",\n",
 "                                                  conda_file = \"myenv.yml\",\n",
 "                                                  description = \"TinyYOLO ONNX Demo\",\n",
 "                                                  tags = {\"demo\": \"onnx\"}\n",
 "                                                  )\n",
 "\n",
 "\n",
 "image = ContainerImage.create(name = \"onnxyolo\",\n",
 "                              models = [model],\n",
 "                              image_config = image_config,\n",
" workspace = ws)\n", " workspace = ws)\n",
"\n", "\n",
"image.wait_for_creation(show_output = True)" "image.wait_for_creation(show_output = True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"In case you need to debug your code, the next line of code accesses the log file." "In case you need to debug your code, the next line of code accesses the log file."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(image.image_build_log_uri)" "print(image.image_build_log_uri)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"We're all set! Let's get our model chugging.\n", "We're all set! Let's get our model chugging.\n",
"\n", "\n",
"### Deploy the container image" "### Deploy the container image"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.webservice import AciWebservice\n", "from azureml.core.webservice import AciWebservice\n",
"\n", "\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n", "aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n", " memory_gb = 1, \n",
" tags = {'demo': 'onnx'}, \n", " tags = {'demo': 'onnx'}, \n",
" description = 'web service for TinyYOLO ONNX model')" " description = 'web service for TinyYOLO ONNX model')"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"The following cell will likely take a few minutes to run as well." "The following cell will likely take a few minutes to run as well."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.webservice import Webservice\n", "from azureml.core.webservice import Webservice\n",
"from random import randint\n", "from random import randint\n",
"\n", "\n",
"aci_service_name = 'onnx-tinyyolo'+str(randint(0,100))\n", "aci_service_name = 'onnx-tinyyolo'+str(randint(0,100))\n",
"print(\"Service\", aci_service_name)\n", "print(\"Service\", aci_service_name)\n",
"\n", "\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n", "aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n", " image = image,\n",
" name = aci_service_name,\n", " name = aci_service_name,\n",
" workspace = ws)\n", " workspace = ws)\n",
"\n", "\n",
"aci_service.wait_for_deployment(True)\n", "aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)" "print(aci_service.state)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again." "In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"if aci_service.state != 'Healthy':\n", "if aci_service.state != 'Healthy':\n",
" # run this command for debugging.\n", " # run this command for debugging.\n",
" print(aci_service.get_logs())\n", " print(aci_service.get_logs())\n",
" aci_service.delete()" " aci_service.delete()"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Success!\n", "## Success!\n",
"\n", "\n",
"If you've made it this far, you've deployed a working web service that does object detection using an ONNX model. You can get the URL for the webservice with the code below." "If you've made it this far, you've deployed a working web service that does object detection using an ONNX model. You can get the URL for the webservice with the code below."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(aci_service.scoring_uri)" "print(aci_service.scoring_uri)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"When you are eventually done using the web service, remember to delete it." "When you are eventually done using the web service, remember to delete it."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"#aci_service.delete()" "#aci_service.delete()"
] ]
} }
],
"metadata": {
"authors": [
{
"name": "onnx"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "viswamy"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
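The TinyYOLO notebook above stops after printing the scoring URI. As a hedged sketch of the client side (the helper names `build_payload` and `parse_response` are illustrative, not part of the notebook), this is how a caller might package an input tensor into the JSON body that the `run()` function shown above expects, and unpack the `{"result": ..., "time": ...}` reply:

```python
import json

import numpy as np


def build_payload(image_nchw):
    # serialize an NCHW float32 tensor into the {"data": ...} JSON body
    # that preprocess() in the score.py above deserializes
    return json.dumps({"data": image_nchw.tolist()})


def parse_response(body):
    # unpack the {"result": ..., "time": ...} dict returned by run();
    # run() reports failures as {"error": ...}, so surface those
    payload = json.loads(body)
    if "error" in payload:
        raise RuntimeError(payload["error"])
    return np.array(payload["result"]), payload["time"]
```

The payload could then be sent either via `aci_service.run(input_data=payload)` or with an HTTP POST to `aci_service.scoring_uri` using a `Content-Type: application/json` header.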


@@ -1,419 +1,419 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n", "Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# ResNet50 Image Classification using ONNX and AzureML\n", "# ResNet50 Image Classification using ONNX and AzureML\n",
"\n", "\n",
"This example shows how to deploy the ResNet50 ONNX model as a web service using Azure Machine Learning services and the ONNX Runtime.\n", "This example shows how to deploy the ResNet50 ONNX model as a web service using Azure Machine Learning services and the ONNX Runtime.\n",
"\n", "\n",
"## What is ONNX\n", "## What is ONNX\n",
"ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by enabling data scientists and developers to use the tools of their choice, without worrying about lock-in, and with the flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).\n", "ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by enabling data scientists and developers to use the tools of their choice, without worrying about lock-in, and with the flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).\n",
"\n", "\n",
"## ResNet50 Details\n", "## ResNet50 Details\n",
"ResNet classifies the major object in an input image into a set of 1000 pre-defined classes. More information about the ResNet50 model and how it was created can be found on the [ONNX Model Zoo github](https://github.com/onnx/models/tree/master/models/image_classification/resnet). " "ResNet classifies the major object in an input image into a set of 1000 pre-defined classes. More information about the ResNet50 model and how it was created can be found on the [ONNX Model Zoo github](https://github.com/onnx/models/tree/master/models/image_classification/resnet). "
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Prerequisites\n", "## Prerequisites\n",
"\n", "\n",
"To make the best use of your time, make sure you have done the following:\n", "To make the best use of your time, make sure you have done the following:\n",
"\n", "\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n", "* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](../00.configuration.ipynb) notebook to:\n", "* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n", " * install the AML SDK\n",
" * create a workspace and its configuration file (config.json)" " * create a workspace and its configuration file (config.json)"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# Check core SDK version number\n", "# Check core SDK version number\n",
"import azureml.core\n", "import azureml.core\n",
"\n", "\n",
"print(\"SDK version:\", azureml.core.VERSION)" "print(\"SDK version:\", azureml.core.VERSION)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"#### Download pre-trained ONNX model from ONNX Model Zoo.\n", "#### Download pre-trained ONNX model from ONNX Model Zoo.\n",
"\n", "\n",
"Download the [ResNet50v2 model and test data](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz) and extract it in the same folder as this tutorial notebook.\n" "Download the [ResNet50v2 model and test data](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz) and extract it in the same folder as this tutorial notebook.\n"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import urllib.request\n", "import urllib.request\n",
"\n", "\n",
"onnx_model_url = \"https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz\"\n", "onnx_model_url = \"https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz\"\n",
"urllib.request.urlretrieve(onnx_model_url, filename=\"resnet50v2.tar.gz\")\n", "urllib.request.urlretrieve(onnx_model_url, filename=\"resnet50v2.tar.gz\")\n",
"\n", "\n",
"!tar xvzf resnet50v2.tar.gz" "!tar xvzf resnet50v2.tar.gz"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Deploying as a web service with Azure ML" "## Deploying as a web service with Azure ML"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Load your Azure ML workspace\n", "### Load your Azure ML workspace\n",
"\n", "\n",
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook." "We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core import Workspace\n", "from azureml.core import Workspace\n",
"\n", "\n",
"ws = Workspace.from_config()\n", "ws = Workspace.from_config()\n",
"print(ws.name, ws.location, ws.resource_group, sep = '\\n')" "print(ws.name, ws.location, ws.resource_group, sep = '\\n')"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Register your model with Azure ML\n", "### Register your model with Azure ML\n",
"\n", "\n",
"Now we upload the model and register it in the workspace." "Now we upload the model and register it in the workspace."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.model import Model\n", "from azureml.core.model import Model\n",
"\n", "\n",
"model = Model.register(model_path = \"resnet50v2/resnet50v2.onnx\",\n", "model = Model.register(model_path = \"resnet50v2/resnet50v2.onnx\",\n",
" model_name = \"resnet50v2\",\n", " model_name = \"resnet50v2\",\n",
" tags = {\"onnx\": \"demo\"},\n", " tags = {\"onnx\": \"demo\"},\n",
" description = \"ResNet50v2 from ONNX Model Zoo\",\n", " description = \"ResNet50v2 from ONNX Model Zoo\",\n",
" workspace = ws)" " workspace = ws)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"#### Displaying your registered models\n", "#### Displaying your registered models\n",
"\n", "\n",
"You can optionally list out all the models that you have registered in this workspace." "You can optionally list out all the models that you have registered in this workspace."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"models = ws.models\n", "models = ws.models\n",
"for name, m in models.items():\n", "for name, m in models.items():\n",
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)" " print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Write scoring file\n", "### Write scoring file\n",
"\n", "\n",
"We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started, so we load the model into a global session object using the ONNX Runtime." "We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started, so we load the model into a global session object using the ONNX Runtime."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"%%writefile score.py\n", "%%writefile score.py\n",
"import json\n", "import json\n",
"import time\n", "import time\n",
"import sys\n", "import sys\n",
"import os\n", "import os\n",
"from azureml.core.model import Model\n", "from azureml.core.model import Model\n",
"import numpy as np # we're going to use numpy to process input and output data\n", "import numpy as np # we're going to use numpy to process input and output data\n",
"import onnxruntime # to run inference on ONNX models, we use the ONNX Runtime\n", "import onnxruntime # to run inference on ONNX models, we use the ONNX Runtime\n",
"\n", "\n",
"def softmax(x):\n", "def softmax(x):\n",
" x = x.reshape(-1)\n", " x = x.reshape(-1)\n",
" e_x = np.exp(x - np.max(x))\n", " e_x = np.exp(x - np.max(x))\n",
" return e_x / e_x.sum(axis=0)\n", " return e_x / e_x.sum(axis=0)\n",
"\n", "\n",
"def init():\n", "def init():\n",
" global session\n", " global session\n",
" model = Model.get_model_path(model_name = 'resnet50v2')\n", " model = Model.get_model_path(model_name = 'resnet50v2')\n",
" session = onnxruntime.InferenceSession(model, None)\n", " session = onnxruntime.InferenceSession(model, None)\n",
"\n", "\n",
"def preprocess(input_data_json):\n", "def preprocess(input_data_json):\n",
" # convert the JSON data into the tensor input\n", " # convert the JSON data into the tensor input\n",
" img_data = np.array(json.loads(input_data_json)['data']).astype('float32')\n", " img_data = np.array(json.loads(input_data_json)['data']).astype('float32')\n",
" \n", " \n",
" #normalize\n", " #normalize\n",
" mean_vec = np.array([0.485, 0.456, 0.406])\n", " mean_vec = np.array([0.485, 0.456, 0.406])\n",
" stddev_vec = np.array([0.229, 0.224, 0.225])\n", " stddev_vec = np.array([0.229, 0.224, 0.225])\n",
" norm_img_data = np.zeros(img_data.shape).astype('float32')\n", " norm_img_data = np.zeros(img_data.shape).astype('float32')\n",
" for i in range(img_data.shape[0]):\n", " for i in range(img_data.shape[0]):\n",
" norm_img_data[i,:,:] = (img_data[i,:,:]/255 - mean_vec[i]) / stddev_vec[i]\n", " norm_img_data[i,:,:] = (img_data[i,:,:]/255 - mean_vec[i]) / stddev_vec[i]\n",
"\n", "\n",
" return norm_img_data\n", " return norm_img_data\n",
"\n", "\n",
"def postprocess(result):\n", "def postprocess(result):\n",
" return softmax(np.array(result)).tolist()\n", " return softmax(np.array(result)).tolist()\n",
"\n", "\n",
"def run(input_data_json):\n", "def run(input_data_json):\n",
" try:\n", " try:\n",
" start = time.time()\n", " start = time.time()\n",
" # load in our data which is expected as NCHW 224x224 image\n", " # load in our data which is expected as NCHW 224x224 image\n",
" input_data = preprocess(input_data_json)\n", " input_data = preprocess(input_data_json)\n",
" input_name = session.get_inputs()[0].name # get the id of the first input of the model \n", " input_name = session.get_inputs()[0].name # get the id of the first input of the model \n",
" result = session.run([], {input_name: input_data})\n", " result = session.run([], {input_name: input_data})\n",
" end = time.time() # stop timer\n", " end = time.time() # stop timer\n",
" return {\"result\": postprocess(result),\n", " return {\"result\": postprocess(result),\n",
" \"time\": end - start}\n", " \"time\": end - start}\n",
" except Exception as e:\n", " except Exception as e:\n",
" result = str(e)\n", " result = str(e)\n",
" return {\"error\": result}" " return {\"error\": result}"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Create container image" "### Create container image"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"First we create a YAML file that specifies which dependencies we would like to see in our container." "First we create a YAML file that specifies which dependencies we would like to see in our container."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.conda_dependencies import CondaDependencies \n", "from azureml.core.conda_dependencies import CondaDependencies \n",
"\n", "\n",
"myenv = CondaDependencies.create(pip_packages=[\"numpy\",\"onnxruntime\",\"azureml-core\"])\n", "myenv = CondaDependencies.create(pip_packages=[\"numpy\",\"onnxruntime\",\"azureml-core\"])\n",
"\n", "\n",
"with open(\"myenv.yml\",\"w\") as f:\n", "with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())" " f.write(myenv.serialize_to_string())"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Then we have Azure ML create the container. This step will likely take a few minutes." "Then we have Azure ML create the container. This step will likely take a few minutes."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.image import ContainerImage\n", "from azureml.core.image import ContainerImage\n",
"\n", "\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n", "image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
" runtime = \"python\",\n", " runtime = \"python\",\n",
" conda_file = \"myenv.yml\",\n", " conda_file = \"myenv.yml\",\n",
" description = \"ONNX ResNet50 Demo\",\n", " description = \"ONNX ResNet50 Demo\",\n",
" tags = {\"demo\": \"onnx\"}\n", " tags = {\"demo\": \"onnx\"}\n",
" )\n", " )\n",
"\n", "\n",
"\n", "\n",
"image = ContainerImage.create(name = \"onnxresnet50v2\",\n", "image = ContainerImage.create(name = \"onnxresnet50v2\",\n",
" models = [model],\n", " models = [model],\n",
" image_config = image_config,\n", " image_config = image_config,\n",
" workspace = ws)\n", " workspace = ws)\n",
"\n", "\n",
"image.wait_for_creation(show_output = True)" "image.wait_for_creation(show_output = True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"In case you need to debug your code, the next line of code accesses the log file." "In case you need to debug your code, the next line of code accesses the log file."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(image.image_build_log_uri)" "print(image.image_build_log_uri)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"We're all set! Let's get our model chugging.\n", "We're all set! Let's get our model chugging.\n",
"\n", "\n",
"### Deploy the container image" "### Deploy the container image"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.webservice import AciWebservice\n", "from azureml.core.webservice import AciWebservice\n",
"\n", "\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n", "aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n", " memory_gb = 1, \n",
" tags = {'demo': 'onnx'}, \n", " tags = {'demo': 'onnx'}, \n",
" description = 'web service for ResNet50 ONNX model')" " description = 'web service for ResNet50 ONNX model')"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"The following cell will likely take a few minutes to run as well." "The following cell will likely take a few minutes to run as well."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.webservice import Webservice\n", "from azureml.core.webservice import Webservice\n",
"from random import randint\n", "from random import randint\n",
"\n", "\n",
"aci_service_name = 'onnx-demo-resnet50'+str(randint(0,100))\n", "aci_service_name = 'onnx-demo-resnet50'+str(randint(0,100))\n",
"print(\"Service\", aci_service_name)\n", "print(\"Service\", aci_service_name)\n",
"\n", "\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n", "aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n", " image = image,\n",
" name = aci_service_name,\n", " name = aci_service_name,\n",
" workspace = ws)\n", " workspace = ws)\n",
"\n", "\n",
"aci_service.wait_for_deployment(True)\n", "aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)" "print(aci_service.state)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again." "In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"if aci_service.state != 'Healthy':\n", "if aci_service.state != 'Healthy':\n",
" # run this command for debugging.\n", " # run this command for debugging.\n",
" print(aci_service.get_logs())\n", " print(aci_service.get_logs())\n",
" aci_service.delete()" " aci_service.delete()"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Success!\n", "## Success!\n",
"\n", "\n",
"If you've made it this far, you've deployed a working web service that does image classification using an ONNX model. You can get the URL for the webservice with the code below." "If you've made it this far, you've deployed a working web service that does image classification using an ONNX model. You can get the URL for the webservice with the code below."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(aci_service.scoring_uri)" "print(aci_service.scoring_uri)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"When you are eventually done using the web service, remember to delete it." "When you are eventually done using the web service, remember to delete it."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"#aci_service.delete()" "#aci_service.delete()"
] ]
} }
],
"metadata": {
"authors": [
{
"name": "onnx"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "viswamy"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
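The ResNet50 notebook's `postprocess` returns a softmax probability vector over the 1000 ImageNet classes. As a small sketch of consuming that output (the `top_k` helper is illustrative and not part of the notebook; mapping indices to label strings via a synset file is left out):

```python
import numpy as np


def softmax(x):
    # numerically stable softmax, the same form used in the score.py above
    x = x.reshape(-1)
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)


def top_k(probs, k=5):
    # return the k most probable class indices with their scores,
    # sorted from most to least likely
    idx = np.argsort(probs)[::-1][:k]
    return [(int(i), float(probs[i])) for i in idx]
```

For example, `top_k(softmax(raw_logits))` on the `result` returned by the service would give the five most likely ImageNet class indices.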


@@ -1,343 +1,343 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n", "Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Deploying a web service to Azure Kubernetes Service (AKS)\n", "# Deploying a web service to Azure Kubernetes Service (AKS)\n",
"This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (a one-time action), and deploying a service to it. \n", "This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (a one-time action), and deploying a service to it. \n",
"We then test and delete the service, image and model." "We then test and delete the service, image and model."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core import Workspace\n", "from azureml.core import Workspace\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n", "from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import Webservice, AksWebservice\n", "from azureml.core.webservice import Webservice, AksWebservice\n",
"from azureml.core.image import Image\n", "from azureml.core.image import Image\n",
"from azureml.core.model import Model" "from azureml.core.model import Model"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import azureml.core\n", "import azureml.core\n",
"print(azureml.core.VERSION)" "print(azureml.core.VERSION)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Get workspace\n", "# Get workspace\n",
"Load the existing workspace from the config file." "Load the existing workspace from the config file."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.workspace import Workspace\n", "from azureml.core.workspace import Workspace\n",
"\n", "\n",
"ws = Workspace.from_config()\n", "ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')" "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Register the model\n", "# Register the model\n",
"Register an existing trained model, add descirption and tags." "Register an existing trained model, add descirption and tags."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"#Register the model\n", "#Register the model\n",
"from azureml.core.model import Model\n", "from azureml.core.model import Model\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n", "model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
" model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n", " model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n", " tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
" description = \"Ridge regression model to predict diabetes\",\n", " description = \"Ridge regression model to predict diabetes\",\n",
" workspace = ws)\n", " workspace = ws)\n",
"\n", "\n",
"print(model.name, model.description, model.version)" "print(model.name, model.description, model.version)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Create an image\n", "# Create an image\n",
"Create an image using the registered model the script that will load and run the model." "Create an image using the registered model the script that will load and run the model."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"%%writefile score.py\n", "%%writefile score.py\n",
"import pickle\n", "import pickle\n",
"import json\n", "import json\n",
"import numpy\n", "import numpy\n",
"from sklearn.externals import joblib\n", "from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n", "from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n", "from azureml.core.model import Model\n",
"\n", "\n",
"def init():\n", "def init():\n",
" global model\n", " global model\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under\n", " # note here \"sklearn_regression_model.pkl\" is the name of the model registered under\n",
" # this is a different behavior than before when the code is run locally, even though the code is the same.\n", " # this is a different behavior than before when the code is run locally, even though the code is the same.\n",
" model_path = Model.get_model_path('sklearn_regression_model.pkl')\n", " model_path = Model.get_model_path('sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n", " # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n", " model = joblib.load(model_path)\n",
"\n", "\n",
"# note you can pass in multiple rows for scoring\n", "# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n", "def run(raw_data):\n",
" try:\n", " try:\n",
" data = json.loads(raw_data)['data']\n", " data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n", " data = numpy.array(data)\n",
" result = model.predict(data)\n", " result = model.predict(data)\n",
" # you can return any data type as long as it is JSON-serializable\n", " # you can return any data type as long as it is JSON-serializable\n",
" return result.tolist()\n", " return result.tolist()\n",
" except Exception as e:\n", " except Exception as e:\n",
" error = str(e)\n", " error = str(e)\n",
" return error" " return error"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.conda_dependencies import CondaDependencies \n", "from azureml.core.conda_dependencies import CondaDependencies \n",
"\n", "\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n", "myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"\n", "\n",
"with open(\"myenv.yml\",\"w\") as f:\n", "with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())" " f.write(myenv.serialize_to_string())"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.image import ContainerImage\n", "from azureml.core.image import ContainerImage\n",
"\n", "\n",
"image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n", "image_config = ContainerImage.image_configuration(execution_script = \"score.py\",\n",
" runtime = \"python\",\n", " runtime = \"python\",\n",
" conda_file = \"myenv.yml\",\n", " conda_file = \"myenv.yml\",\n",
" description = \"Image with ridge regression model\",\n", " description = \"Image with ridge regression model\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}\n", " tags = {'area': \"diabetes\", 'type': \"regression\"}\n",
" )\n", " )\n",
"\n", "\n",
"image = ContainerImage.create(name = \"myimage1\",\n", "image = ContainerImage.create(name = \"myimage1\",\n",
" # this is the model object\n", " # this is the model object\n",
" models = [model],\n", " models = [model],\n",
" image_config = image_config,\n", " image_config = image_config,\n",
" workspace = ws)\n", " workspace = ws)\n",
"\n", "\n",
"image.wait_for_creation(show_output = True)" "image.wait_for_creation(show_output = True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Provision the AKS Cluster\n", "# Provision the AKS Cluster\n",
"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it." "This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# Use the default configuration (can also provide parameters to customize)\n", "# Use the default configuration (can also provide parameters to customize)\n",
"prov_config = AksCompute.provisioning_configuration()\n", "prov_config = AksCompute.provisioning_configuration()\n",
"\n", "\n",
"aks_name = 'my-aks-9' \n", "aks_name = 'my-aks-9' \n",
"# Create the cluster\n", "# Create the cluster\n",
"aks_target = ComputeTarget.create(workspace = ws, \n", "aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n", " name = aks_name, \n",
" provisioning_configuration = prov_config)" " provisioning_configuration = prov_config)"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"%%time\n", "%%time\n",
"aks_target.wait_for_completion(show_output = True)\n", "aks_target.wait_for_completion(show_output = True)\n",
"print(aks_target.provisioning_state)\n", "print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)" "print(aks_target.provisioning_errors)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Optional step: Attach existing AKS cluster\n", "## Optional step: Attach existing AKS cluster\n",
"\n", "\n",
"If you have existing AKS cluster in your Azure subscription, you can attach it to the Workspace." "If you have existing AKS cluster in your Azure subscription, you can attach it to the Workspace."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"'''\n", "'''\n",
"# Use the default configuration (can also provide parameters to customize)\n", "# Use the default configuration (can also provide parameters to customize)\n",
"resource_id = '/subscriptions/92c76a2f-0e1c-4216-b65e-abf7a3f34c1e/resourcegroups/raymondsdk0604/providers/Microsoft.ContainerService/managedClusters/my-aks-0605d37425356b7d01'\n", "resource_id = '/subscriptions/92c76a2f-0e1c-4216-b65e-abf7a3f34c1e/resourcegroups/raymondsdk0604/providers/Microsoft.ContainerService/managedClusters/my-aks-0605d37425356b7d01'\n",
"\n", "\n",
"create_name='my-existing-aks' \n", "create_name='my-existing-aks' \n",
"# Create the cluster\n", "# Create the cluster\n",
"attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n", "attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
"aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config)\n", "aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config)\n",
"# Wait for the operation to complete\n", "# Wait for the operation to complete\n",
"aks_target.wait_for_completion(True)\n", "aks_target.wait_for_completion(True)\n",
"'''" "'''"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Deploy web service to AKS" "# Deploy web service to AKS"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"#Set the web service configuration (using default here)\n", "#Set the web service configuration (using default here)\n",
"aks_config = AksWebservice.deploy_configuration()" "aks_config = AksWebservice.deploy_configuration()"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"%%time\n", "%%time\n",
"aks_service_name ='aks-service-1'\n", "aks_service_name ='aks-service-1'\n",
"\n", "\n",
"aks_service = Webservice.deploy_from_image(workspace = ws, \n", "aks_service = Webservice.deploy_from_image(workspace = ws, \n",
" name = aks_service_name,\n", " name = aks_service_name,\n",
" image = image,\n", " image = image,\n",
" deployment_config = aks_config,\n", " deployment_config = aks_config,\n",
" deployment_target = aks_target)\n", " deployment_target = aks_target)\n",
"aks_service.wait_for_deployment(show_output = True)\n", "aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)" "print(aks_service.state)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Test the web service\n", "# Test the web service\n",
"We test the web sevice by passing data." "We test the web sevice by passing data."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"%%time\n", "%%time\n",
"import json\n", "import json\n",
"\n", "\n",
"test_sample = json.dumps({'data': [\n", "test_sample = json.dumps({'data': [\n",
" [1,2,3,4,5,6,7,8,9,10], \n", " [1,2,3,4,5,6,7,8,9,10], \n",
" [10,9,8,7,6,5,4,3,2,1]\n", " [10,9,8,7,6,5,4,3,2,1]\n",
"]})\n", "]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n", "test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n", "\n",
"prediction = aks_service.run(input_data = test_sample)\n", "prediction = aks_service.run(input_data = test_sample)\n",
"print(prediction)" "print(prediction)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Clean up\n", "# Clean up\n",
"Delete the service, image and model." "Delete the service, image and model."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"%%time\n", "%%time\n",
"aks_service.delete()\n", "aks_service.delete()\n",
"image.delete()\n", "image.delete()\n",
"model.delete()" "model.delete()"
] ]
} }
],
"metadata": {
"authors": [
{
"name": "raymondl"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "raymondl"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
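The deployment notebook above tests the service by sending a JSON payload that the scoring script's `run()` parses and scores. That round-trip can be sketched without any Azure resources; this is a minimal stdlib-only sketch, where the row-sum function is a hypothetical stand-in for the Ridge model's `predict`:

```python
import json

# Build the request payload exactly as the notebook's test cell does:
# a JSON object whose 'data' key holds a list of feature rows, encoded to bytes.
test_sample = json.dumps({'data': [
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
]})
test_sample = bytes(test_sample, encoding='utf8')

def run(raw_data):
    """Mirror of the scoring script's run(): parse the body, score each row."""
    try:
        data = json.loads(raw_data)['data']
        # Hypothetical stand-in for model.predict(data): sum each feature row.
        result = [sum(row) for row in data]
        # any JSON-serializable value may be returned to the caller
        return result
    except Exception as e:
        return str(e)

print(run(test_sample))  # both rows sum to 55
```

The same shape applies to the deployed service: `aks_service.run(input_data=test_sample)` delivers these bytes to `run()` inside the container.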


@@ -1,420 +1,421 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n", "Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## 10. Register Model, Create Image and Deploy Service\n", "## Register Model, Create Image and Deploy Service\n",
"\n", "\n",
"This example shows how to deploy a web service in step-by-step fashion:\n", "This example shows how to deploy a web service in step-by-step fashion:\n",
"\n", "\n",
" 1. Register model\n", " 1. Register model\n",
" 2. Query versions of models and select one to deploy\n", " 2. Query versions of models and select one to deploy\n",
" 3. Create Docker image\n", " 3. Create Docker image\n",
" 4. Query versions of images\n", " 4. Query versions of images\n",
" 5. Deploy the image as web service\n", " 5. Deploy the image as web service\n",
" \n", " \n",
"**IMPORTANT**:\n", "**IMPORTANT**:\n",
" * This notebook requires you to first complete \"01.SDK-101-Train-and-Deploy-to-ACI.ipynb\" Notebook\n", " * This notebook requires you to first complete [train-within-notebook](../../training/train-within-notebook/train-within-notebook.ipynb) example\n",
" \n", " \n",
"The 101 Notebook taught you how to deploy a web service directly from model in one step. This Notebook shows a more advanced approach that gives you more control over model versions and Docker image versions. " "The train-within-notebook example taught you how to deploy a web service directly from model in one step. This Notebook shows a more advanced approach that gives you more control over model versions and Docker image versions. "
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Prerequisites\n", "## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't." "Make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# Check core SDK version number\n", "# Check core SDK version number\n",
"import azureml.core\n", "import azureml.core\n",
"\n", "\n",
"print(\"SDK version:\", azureml.core.VERSION)" "print(\"SDK version:\", azureml.core.VERSION)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Initialize Workspace\n", "## Initialize Workspace\n",
"\n", "\n",
"Initialize a workspace object from persisted configuration." "Initialize a workspace object from persisted configuration."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": { "metadata": {
"tags": [ "tags": [
"create workspace" "create workspace"
] ]
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core import Workspace\n", "from azureml.core import Workspace\n",
"\n", "\n",
"ws = Workspace.from_config()\n", "ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')" "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Register Model" "### Register Model"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"You can add tags and descriptions to your models. Note you need to have a `sklearn_linreg_model.pkl` file in the current directory. This file is generated by the 01 notebook. The below call registers that file as a model with the same name `sklearn_linreg_model.pkl` in the workspace.\n", "You can add tags and descriptions to your models. Note you need to have a `sklearn_linreg_model.pkl` file in the current directory. This file is generated by the 01 notebook. The below call registers that file as a model with the same name `sklearn_linreg_model.pkl` in the workspace.\n",
"\n", "\n",
"Using tags, you can track useful information such as the name and version of the machine learning library used to train the model. Note that tags must be alphanumeric." "Using tags, you can track useful information such as the name and version of the machine learning library used to train the model. Note that tags must be alphanumeric."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": { "metadata": {
"tags": [ "tags": [
"register model from file" "register model from file"
] ]
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.model import Model\n", "from azureml.core.model import Model\n",
"import sklearn\n", "import sklearn\n",
"\n", "\n",
"library_version = \"sklearn\"+sklearn.__version__.replace(\".\",\"x\")\n", "library_version = \"sklearn\"+sklearn.__version__.replace(\".\",\"x\")\n",
"\n", "\n",
"model = Model.register(model_path = \"sklearn_regression_model.pkl\",\n", "model = Model.register(model_path = \"sklearn_regression_model.pkl\",\n",
" model_name = \"sklearn_regression_model.pkl\",\n", " model_name = \"sklearn_regression_model.pkl\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\", 'version': library_version},\n", " tags = {'area': \"diabetes\", 'type': \"regression\", 'version': library_version},\n",
" description = \"Ridge regression model to predict diabetes\",\n", " description = \"Ridge regression model to predict diabetes\",\n",
" workspace = ws)" " workspace = ws)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"You can explore the registered models within your workspace and query by tag. Models are versioned. If you call the register_model command many times with same model name, you will get multiple versions of the model with increasing version numbers." "You can explore the registered models within your workspace and query by tag. Models are versioned. If you call the register_model command many times with same model name, you will get multiple versions of the model with increasing version numbers."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": { "metadata": {
"tags": [ "tags": [
"register model from file" "register model from file"
] ]
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"regression_models = Model.list(workspace=ws, tags=['area'])\n", "regression_models = Model.list(workspace=ws, tags=['area'])\n",
"for m in regression_models:\n", "for m in regression_models:\n",
" print(\"Name:\", m.name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)" " print(\"Name:\", m.name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"You can pick a specific model to deploy" "You can pick a specific model to deploy"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(model.name, model.description, model.version, sep = '\\t')" "print(model.name, model.description, model.version, sep = '\\t')"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Create Docker Image" "### Create Docker Image"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Show `score.py`. Note that the `sklearn_regression_model.pkl` in the `get_model_path` call is referring to a model named `sklearn_linreg_model.pkl` registered under the workspace. It is NOT referenceing the local file." "Show `score.py`. Note that the `sklearn_regression_model.pkl` in the `get_model_path` call is referring to a model named `sklearn_linreg_model.pkl` registered under the workspace. It is NOT referenceing the local file."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"%%writefile score.py\n", "%%writefile score.py\n",
"import pickle\n", "import pickle\n",
"import json\n", "import json\n",
"import numpy\n", "import numpy\n",
"from sklearn.externals import joblib\n", "from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n", "from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n", "from azureml.core.model import Model\n",
"\n", "\n",
"def init():\n", "def init():\n",
" global model\n", " global model\n",
" # note here \"sklearn_regression_model.pkl\" is the name of the model registered under\n", " # note here \"sklearn_regression_model.pkl\" is the name of the model registered under\n",
" # this is a different behavior than before when the code is run locally, even though the code is the same.\n", " # this is a different behavior than before when the code is run locally, even though the code is the same.\n",
" model_path = Model.get_model_path('sklearn_regression_model.pkl')\n", " model_path = Model.get_model_path('sklearn_regression_model.pkl')\n",
" # deserialize the model file back into a sklearn model\n", " # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n", " model = joblib.load(model_path)\n",
"\n", "\n",
"# note you can pass in multiple rows for scoring\n", "# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n", "def run(raw_data):\n",
" try:\n", " try:\n",
" data = json.loads(raw_data)['data']\n", " data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n", " data = numpy.array(data)\n",
" result = model.predict(data)\n", " result = model.predict(data)\n",
" # you can return any datatype as long as it is JSON-serializable\n", " # you can return any datatype as long as it is JSON-serializable\n",
" return result.tolist()\n", " return result.tolist()\n",
" except Exception as e:\n", " except Exception as e:\n",
" error = str(e)\n", " error = str(e)\n",
" return error" " return error"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.conda_dependencies import CondaDependencies \n", "from azureml.core.conda_dependencies import CondaDependencies \n",
"\n", "\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n", "myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'])\n",
"\n", "\n",
"with open(\"myenv.yml\",\"w\") as f:\n", "with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())" " f.write(myenv.serialize_to_string())"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Note that following command can take few minutes. \n", "Note that following command can take few minutes. \n",
"\n", "\n",
"You can add tags and descriptions to images. Also, an image can contain multiple models." "You can add tags and descriptions to images. Also, an image can contain multiple models."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": { "metadata": {
"tags": [ "tags": [
"create image" "create image"
] ]
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.image import Image, ContainerImage\n", "from azureml.core.image import Image, ContainerImage\n",
"\n", "\n",
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n", "image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
" execution_script=\"score.py\",\n", " execution_script=\"score.py\",\n",
" conda_file=\"myenv.yml\",\n", " conda_file=\"myenv.yml\",\n",
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n", " tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
" description = \"Image with ridge regression model\")\n", " description = \"Image with ridge regression model\")\n",
"\n", "\n",
"image = Image.create(name = \"myimage1\",\n", "image = Image.create(name = \"myimage1\",\n",
" # this is the model object \n", " # this is the model object. note you can pass in 0-n models via this list-type parameter\n",
" models = [model],\n", " # in case you need to reference multiple models, or none at all, in your scoring script.\n",
" image_config = image_config, \n", " models = [model],\n",
" workspace = ws)" " image_config = image_config, \n",
] " workspace = ws)"
}, ]
{ },
"cell_type": "code", {
"execution_count": null, "cell_type": "code",
"metadata": { "execution_count": null,
"tags": [ "metadata": {
"create image" "tags": [
] "create image"
}, ]
"outputs": [], },
"source": [ "outputs": [],
"image.wait_for_creation(show_output = True)" "source": [
] "image.wait_for_creation(show_output = True)"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "markdown",
"source": [ "metadata": {},
"List images by tag and find out the detailed build log for debugging." "source": [
] "List images by tag and find out the detailed build log for debugging."
}, ]
{ },
"cell_type": "code", {
"execution_count": null, "cell_type": "code",
"metadata": { "execution_count": null,
"tags": [ "metadata": {
"create image" "tags": [
] "create image"
}, ]
"outputs": [], },
"source": [ "outputs": [],
"for i in Image.list(workspace = ws,tags = [\"area\"]):\n", "source": [
" print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))" "for i in Image.list(workspace = ws,tags = [\"area\"]):\n",
] " print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "markdown",
"source": [ "metadata": {},
"### Deploy image as web service on Azure Container Instance\n", "source": [
"\n", "### Deploy image as web service on Azure Container Instance\n",
"Note that the service creation can take few minutes." "\n",
] "Note that the service creation can take few minutes."
}, ]
{ },
"cell_type": "code", {
"execution_count": null, "cell_type": "code",
"metadata": { "execution_count": null,
"tags": [ "metadata": {
"deploy service", "tags": [
"aci" "deploy service",
] "aci"
}, ]
"outputs": [], },
"source": [ "outputs": [],
"from azureml.core.webservice import AciWebservice\n", "source": [
"\n", "from azureml.core.webservice import AciWebservice\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n", "\n",
" memory_gb = 1, \n", "aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}, \n", " memory_gb = 1, \n",
" description = 'Predict diabetes using regression model')" " tags = {'area': \"diabetes\", 'type': \"regression\"}, \n",
] " description = 'Predict diabetes using regression model')"
}, ]
{ },
"cell_type": "code", {
"execution_count": null, "cell_type": "code",
"metadata": { "execution_count": null,
"tags": [ "metadata": {
"deploy service", "tags": [
"aci" "deploy service",
] "aci"
}, ]
"outputs": [], },
"source": [ "outputs": [],
"from azureml.core.webservice import Webservice\n", "source": [
"\n", "from azureml.core.webservice import Webservice\n",
"aci_service_name = 'my-aci-service-2'\n", "\n",
"print(aci_service_name)\n", "aci_service_name = 'my-aci-service-2'\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n", "print(aci_service_name)\n",
" image = image,\n", "aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" name = aci_service_name,\n", " image = image,\n",
" workspace = ws)\n", " name = aci_service_name,\n",
"aci_service.wait_for_deployment(True)\n", " workspace = ws)\n",
"print(aci_service.state)" "aci_service.wait_for_deployment(True)\n",
] "print(aci_service.state)"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "markdown",
"source": [ "metadata": {},
"### Test web service" "source": [
] "### Test web service"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "markdown",
"source": [ "metadata": {},
"Call the web service with some dummy input data to get a prediction." "source": [
] "Call the web service with some dummy input data to get a prediction."
}, ]
{ },
"cell_type": "code", {
"execution_count": null, "cell_type": "code",
"metadata": { "execution_count": null,
"tags": [ "metadata": {
"deploy service", "tags": [
"aci" "deploy service",
] "aci"
}, ]
"outputs": [], },
"source": [ "outputs": [],
"import json\n", "source": [
"\n", "import json\n",
"test_sample = json.dumps({'data': [\n", "\n",
" [1,2,3,4,5,6,7,8,9,10], \n", "test_sample = json.dumps({'data': [\n",
" [10,9,8,7,6,5,4,3,2,1]\n", " [1,2,3,4,5,6,7,8,9,10], \n",
"]})\n", " [10,9,8,7,6,5,4,3,2,1]\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n", "]})\n",
"\n", "test_sample = bytes(test_sample,encoding = 'utf8')\n",
"prediction = aci_service.run(input_data=test_sample)\n", "\n",
"print(prediction)" "prediction = aci_service.run(input_data=test_sample)\n",
] "print(prediction)"
}, ]
{ },
"cell_type": "markdown", {
"metadata": {}, "cell_type": "markdown",
"source": [ "metadata": {},
"### Delete ACI to clean up" "source": [
] "### Delete ACI to clean up"
}, ]
{ },
"cell_type": "code", {
"execution_count": null, "cell_type": "code",
"metadata": { "execution_count": null,
"tags": [ "metadata": {
"deploy service", "tags": [
"aci" "deploy service",
] "aci"
}, ]
"outputs": [], },
"source": [ "outputs": [],
"aci_service.delete()" "source": [
] "aci_service.delete()"
} ]
], }
"metadata": {
"authors": [
{
"name": "raymondl"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "raymondl"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
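The test cell above serializes two dummy feature rows into the UTF-8 JSON body that the scoring call expects. A minimal, self-contained sketch of that serialization step (the helper name `build_scoring_payload` is illustrative, not part of the Azure ML SDK):

```python
import json

def build_scoring_payload(rows):
    # Serialize a list of feature rows into the UTF-8 encoded JSON body
    # of the form {"data": [...]} used by the aci_service.run() call above.
    return bytes(json.dumps({'data': rows}), encoding='utf8')

payload = build_scoring_payload([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                                 [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]])
print(payload.decode('utf8'))
```

Keeping the payload construction in one place like this makes it easy to reuse the same bytes for a raw HTTP POST against the service's scoring URI.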


@@ -44,6 +44,9 @@ In this directory, there are two types of notebooks:
4. [aml-pipelines-data-transfer.ipynb](https://aka.ms/pl-data-trans)
5. [aml-pipelines-use-databricks-as-compute-target.ipynb](https://aka.ms/pl-databricks)
6. [aml-pipelines-use-adla-as-compute-target.ipynb](https://aka.ms/pl-adla)
7. [aml-pipelines-parameter-tuning-with-hyperdrive.ipynb](https://aka.ms/pl-hyperdrive)
8. [aml-pipelines-how-to-use-azurebatch-to-run-a-windows-executable.ipynb](https://aka.ms/pl-azbatch)
9. [aml-pipelines-setup-schedule-for-a-published-pipeline.ipynb](https://aka.ms/pl-schedule)
* The second type of notebooks illustrate more sophisticated scenarios, and are independent of each other. These notebooks include:

Binary file not shown.


@@ -1,332 +1,469 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Microsoft Corporation. All rights reserved. \n",
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Azure Machine Learning Pipeline with DataTransferStep\n",
    "This notebook demonstrates the use of DataTransferStep in an Azure Machine Learning Pipeline.\n",
    "\n",
    "In certain cases, you will need to transfer data from one data location to another. For example, your data may be in Files storage and you may want to move it to Blob storage, or your data may be in an ADLS account that you want to make available in Blob storage. The built-in **DataTransferStep** class helps you transfer data in these situations.\n",
    "\n",
    "The example below shows how to move data between an ADLS account, Blob storage, a SQL Server database, and a PostgreSQL database."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Azure Machine Learning and Pipeline SDK-specific imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import azureml.core\n",
    "from azureml.core.compute import ComputeTarget, DataFactoryCompute\n",
    "from azureml.exceptions import ComputeTargetException\n",
    "from azureml.core import Workspace, Experiment\n",
    "from azureml.pipeline.core import Pipeline\n",
    "from azureml.core.datastore import Datastore\n",
    "from azureml.data.data_reference import DataReference\n",
    "from azureml.pipeline.steps import DataTransferStep\n",
    "\n",
    "# Check core SDK version number\n",
    "print(\"SDK version:\", azureml.core.VERSION)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Initialize Workspace\n",
    "\n",
    "Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json\n",
    "\n",
    "If you don't have a config.json file, please go through the configuration Notebook located here:\n",
    "https://github.com/Azure/MachineLearningNotebooks. \n",
    "\n",
    "This sets you up with a working config file that has information on your workspace, subscription id, etc. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "create workspace"
    ]
   },
   "outputs": [],
   "source": [
    "ws = Workspace.from_config()\n",
    "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Register Datastores\n",
    "\n",
    "In the code cell below, you will need to fill in the appropriate values for the workspace name, datastore name, subscription id, resource group, store name, tenant id, client id, and client secret that are associated with your ADLS datastore. \n",
    "\n",
    "For background on registering your data store, consult this article:\n",
    "\n",
    "https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory\n",
    "\n",
    "### register datastores for Azure Data Lake and Azure Blob storage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from msrest.exceptions import HttpOperationError\n",
    "\n",
    "datastore_name='MyAdlsDatastore'\n",
    "subscription_id=os.getenv(\"ADL_SUBSCRIPTION_62\", \"<my-subscription-id>\") # subscription id of ADLS account\n",
    "resource_group=os.getenv(\"ADL_RESOURCE_GROUP_62\", \"<my-resource-group>\") # resource group of ADLS account\n",
    "store_name=os.getenv(\"ADL_STORENAME_62\", \"<my-datastore-name>\") # ADLS account name\n",
    "tenant_id=os.getenv(\"ADL_TENANT_62\", \"<my-tenant-id>\") # tenant id of service principal\n",
    "client_id=os.getenv(\"ADL_CLIENTID_62\", \"<my-client-id>\") # client id of service principal\n",
    "client_secret=os.getenv(\"ADL_CLIENT_SECRET_62\", \"<my-client-secret>\") # the secret of service principal\n",
    "\n",
    "try:\n",
    "    adls_datastore = Datastore.get(ws, datastore_name)\n",
    "    print(\"found datastore with name: %s\" % datastore_name)\n",
    "except HttpOperationError:\n",
    "    adls_datastore = Datastore.register_azure_data_lake(\n",
    "        workspace=ws,\n",
    "        datastore_name=datastore_name,\n",
    "        subscription_id=subscription_id, # subscription id of ADLS account\n",
    "        resource_group=resource_group, # resource group of ADLS account\n",
    "        store_name=store_name, # ADLS account name\n",
    "        tenant_id=tenant_id, # tenant id of service principal\n",
    "        client_id=client_id, # client id of service principal\n",
    "        client_secret=client_secret) # the secret of service principal\n",
    "    print(\"registered datastore with name: %s\" % datastore_name)\n",
    "\n",
    "\n",
    "blob_datastore_name='MyBlobDatastore'\n",
    "account_name=os.getenv(\"BLOB_ACCOUNTNAME_62\", \"<my-account-name>\") # Storage account name\n",
    "container_name=os.getenv(\"BLOB_CONTAINER_62\", \"<my-container-name>\") # Name of Azure blob container\n",
    "account_key=os.getenv(\"BLOB_ACCOUNT_KEY_62\", \"<my-account-key>\") # Storage account key\n",
    "\n",
    "try:\n",
    "    blob_datastore = Datastore.get(ws, blob_datastore_name)\n",
    "    print(\"found blob datastore with name: %s\" % blob_datastore_name)\n",
    "except HttpOperationError:\n",
    "    blob_datastore = Datastore.register_azure_blob_container(\n",
    "        workspace=ws,\n",
    "        datastore_name=blob_datastore_name,\n",
    "        account_name=account_name, # Storage account name\n",
    "        container_name=container_name, # Name of Azure blob container\n",
    "        account_key=account_key) # Storage account key\n",
    "    print(\"registered blob datastore with name: %s\" % blob_datastore_name)\n",
    "\n",
    "# CLI:\n",
    "# az ml datastore register-blob -n <datastore-name> -a <account-name> -c <container-name> -k <account-key> [-t <sas-token>]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### register datastores for Azure SQL Server and Azure database for PostgreSQL"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sql_datastore_name=\"MySqlDatastore\"\n",
    "server_name=os.getenv(\"SQL_SERVERNAME_62\", \"<my-server-name>\") # Name of SQL server\n",
    "database_name=os.getenv(\"SQL_DATBASENAME_62\", \"<my-database-name>\") # Name of SQL database\n",
    "client_id=os.getenv(\"SQL_CLIENTNAME_62\", \"<my-client-id>\") # client id of service principal with permissions to access database\n",
    "client_secret=os.getenv(\"SQL_CLIENTSECRET_62\", \"<my-client-secret>\") # the secret of service principal\n",
    "tenant_id=os.getenv(\"SQL_TENANTID_62\", \"<my-tenant-id>\") # tenant id of service principal\n",
    "\n",
    "try:\n",
    "    sql_datastore = Datastore.get(ws, sql_datastore_name)\n",
    "    print(\"found sql database datastore with name: %s\" % sql_datastore_name)\n",
    "except HttpOperationError:\n",
    "    sql_datastore = Datastore.register_azure_sql_database(\n",
    "        workspace=ws,\n",
    "        datastore_name=sql_datastore_name,\n",
    "        server_name=server_name,\n",
    "        database_name=database_name,\n",
    "        client_id=client_id,\n",
    "        client_secret=client_secret,\n",
    "        tenant_id=tenant_id)\n",
    "    print(\"registered sql database datastore with name: %s\" % sql_datastore_name)\n",
    "\n",
    "\n",
    "psql_datastore_name=\"MyPostgreSqlDatastore\"\n",
    "server_name=os.getenv(\"PSQL_SERVERNAME_62\", \"<my-server-name>\") # Name of PostgreSQL server\n",
    "database_name=os.getenv(\"PSQL_DATBASENAME_62\", \"<my-database-name>\") # Name of PostgreSQL database\n",
    "user_id=os.getenv(\"PSQL_USERID_62\", \"<my-user-id>\") # user id\n",
    "user_password=os.getenv(\"PSQL_USERPW_62\", \"<my-user-password>\") # user password\n",
    "\n",
    "try:\n",
    "    psql_datastore = Datastore.get(ws, psql_datastore_name)\n",
    "    print(\"found PostgreSQL database datastore with name: %s\" % psql_datastore_name)\n",
    "except HttpOperationError:\n",
    "    psql_datastore = Datastore.register_azure_postgre_sql(\n",
    "        workspace=ws,\n",
    "        datastore_name=psql_datastore_name,\n",
    "        server_name=server_name,\n",
    "        database_name=database_name,\n",
    "        user_id=user_id,\n",
    "        user_password=user_password)\n",
    "    print(\"registered PostgreSQL database datastore with name: %s\" % psql_datastore_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create DataReferences\n",
    "### create DataReferences for Azure Data Lake and Azure Blob storage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "adls_datastore = Datastore(workspace=ws, name=\"MyAdlsDatastore\")\n",
    "\n",
    "# adls\n",
    "adls_data_ref = DataReference(\n",
    "    datastore=adls_datastore,\n",
    "    data_reference_name=\"adls_test_data\",\n",
    "    path_on_datastore=\"testdata\")\n",
    "\n",
    "blob_datastore = Datastore(workspace=ws, name=\"MyBlobDatastore\")\n",
    "\n",
    "# blob data\n",
    "blob_data_ref = DataReference(\n",
    "    datastore=blob_datastore,\n",
    "    data_reference_name=\"blob_test_data\",\n",
    "    path_on_datastore=\"testdata\")\n",
    "\n",
    "print(\"obtained adls, blob data references\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### create DataReferences for Azure SQL Server and Azure database for PostgreSQL"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.data.sql_data_reference import SqlDataReference\n",
    "\n",
    "sql_datastore = Datastore(workspace=ws, name=\"MySqlDatastore\")\n",
    "\n",
    "sql_query_data_ref = SqlDataReference(\n",
    "    datastore=sql_datastore,\n",
    "    data_reference_name=\"sql_query_data_ref\",\n",
    "    sql_query=\"select top 1 * from TestData\")\n",
    "\n",
    "\n",
    "psql_datastore = Datastore(workspace=ws, name=\"MyPostgreSqlDatastore\")\n",
    "\n",
    "psql_query_data_ref = SqlDataReference(\n",
    "    datastore=psql_datastore,\n",
    "    data_reference_name=\"psql_query_data_ref\",\n",
    "    sql_query=\"SELECT * FROM testtable\")\n",
    "\n",
    "print(\"obtained SQL Server, PostgreSQL data references\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup Data Factory Account"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_factory_name = 'adftest'\n",
    "\n",
    "def get_or_create_data_factory(workspace, factory_name):\n",
    "    try:\n",
    "        return DataFactoryCompute(workspace, factory_name)\n",
    "    except ComputeTargetException as e:\n",
    "        if 'ComputeTargetNotFound' in e.message:\n",
    "            print('Data factory not found, creating...')\n",
    "            provisioning_config = DataFactoryCompute.provisioning_configuration()\n",
    "            data_factory = ComputeTarget.create(workspace, factory_name, provisioning_config)\n",
    "            data_factory.wait_for_completion()\n",
    "            return data_factory\n",
    "        else:\n",
    "            raise e\n",
    "\n",
    "data_factory_compute = get_or_create_data_factory(ws, data_factory_name)\n",
    "\n",
    "print(\"setup data factory account complete\")\n",
    "\n",
    "# CLI:\n",
    "# Create: az ml computetarget setup datafactory -n <name>\n",
    "# BYOC: az ml computetarget attach datafactory -n <name> -i <resource-id>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create a DataTransferStep"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**DataTransferStep** is used to transfer data between Azure Blob, Azure Data Lake Store, and Azure SQL database.\n",
    "\n",
    "- **name:** Name of module\n",
    "- **source_data_reference:** Input connection that serves as source of data transfer operation.\n",
    "- **destination_data_reference:** Input connection that serves as destination of data transfer operation.\n",
    "- **compute_target:** Azure Data Factory to use for transferring data.\n",
    "- **allow_reuse:** Whether the step should reuse results of previous DataTransferStep when run with same inputs. Set as False to force data to be transferred again.\n",
    "\n",
    "Optional arguments to explicitly specify whether a path corresponds to a file or a directory. These are useful when storage contains both file and directory with the same name or when creating a new destination path.\n",
    "\n",
    "- **source_reference_type:** An optional string specifying the type of source_data_reference. Possible values include: 'file', 'directory'. When not specified, we use the type of existing path or directory if it's a new path.\n",
    "- **destination_reference_type:** An optional string specifying the type of destination_data_reference. Possible values include: 'file', 'directory'. When not specified, we use the type of existing path or directory if it's a new path."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "transfer_adls_to_blob = DataTransferStep(\n",
    "    name=\"transfer_adls_to_blob\",\n",
    "    source_data_reference=adls_data_ref,\n",
    "    destination_data_reference=blob_data_ref,\n",
    "    compute_target=data_factory_compute)\n",
    "\n",
    "print(\"data transfer step created\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "transfer_sql_to_blob = DataTransferStep(\n",
    "    name=\"transfer_sql_to_blob\",\n",
    "    source_data_reference=sql_query_data_ref,\n",
    "    destination_data_reference=blob_data_ref,\n",
    "    compute_target=data_factory_compute,\n",
    "    destination_reference_type='file')\n",
    "\n",
    "transfer_psql_to_blob = DataTransferStep(\n",
    "    name=\"transfer_psql_to_blob\",\n",
    "    source_data_reference=psql_query_data_ref,\n",
    "    destination_data_reference=blob_data_ref,\n",
    "    compute_target=data_factory_compute,\n",
    "    destination_reference_type='file')\n",
    "\n",
    "print(\"data transfer step created for SQL Server and PostgreSQL\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Build and Submit the Experiment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pipeline_01 = Pipeline(\n",
    "    description=\"data_transfer_01\",\n",
    "    workspace=ws,\n",
    "    steps=[transfer_adls_to_blob])\n",
    "\n",
    "pipeline_run_01 = Experiment(ws, \"Data_Transfer_example_01\").submit(pipeline_01)\n",
    "pipeline_run_01.wait_for_completion()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pipeline_02 = Pipeline(\n",
    "    description=\"data_transfer_02\",\n",
    "    workspace=ws,\n",
    "    steps=[transfer_sql_to_blob, transfer_psql_to_blob])\n",
    "\n",
    "pipeline_run_02 = Experiment(ws, \"Data_Transfer_example_02\").submit(pipeline_02)\n",
    "pipeline_run_02.wait_for_completion()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### View Run Details"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.widgets import RunDetails\n",
    "RunDetails(pipeline_run_01).show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.widgets import RunDetails\n",
    "RunDetails(pipeline_run_02).show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Next: Databricks as a Compute Target\n",
    "To use Databricks as a compute target from Azure Machine Learning Pipeline, a DatabricksStep is used. This [notebook](./aml-pipelines-use-databricks-as-compute-target.ipynb) demonstrates the use of a DatabricksStep in an Azure Machine Learning Pipeline."
   ]
  }
 ],
 "metadata": {
  "authors": [
   {
    "name": "diray"
   }
  ],
  "kernelspec": {
   "display_name": "Python 3.6",
   "language": "python",
   "name": "python36"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}


@@ -0,0 +1,376 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure Machine Learning Pipeline with AzureBatchStep"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is used to demonstrate the use of AzureBatchStep in Azure Machine Learning Pipeline.\n",
"An AzureBatchStep will submit a job to an AzureBatch Compute to run a simple windows executable."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Azure Machine Learning and Pipeline SDK-specific Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace, Experiment\n",
"from azureml.core.compute import ComputeTarget, BatchCompute\n",
"from azureml.core.datastore import Datastore\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.exceptions import ComputeTargetException\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import AzureBatchStep\n",
"\n",
"import os\n",
"from os import path\n",
"from tempfile import mkdtemp\n",
"\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json\n",
"\n",
"If you don't have a config.json file, please go through the configuration Notebook located [here](https://github.com/Azure/MachineLearningNotebooks). \n",
"\n",
"This sets you up with a working config file that has information on your workspace, subscription id, etc. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"print('Workspace Name: ' + ws.name, \n",
" 'Azure Region: ' + ws.location, \n",
" 'Subscription Id: ' + ws.subscription_id, \n",
" 'Resource Group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach Batch Compute to Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To submit jobs to Azure Batch service, you must attach your Azure Batch account to the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"batch_compute_name = 'mybatchcompute' # Name to associate with new compute in workspace\n",
"\n",
"# Batch account details needed to attach as compute to workspace\n",
"batch_account_name = \"<batch_account_name>\" # Name of the Batch account\n",
"batch_resource_group = \"<batch_resource_group>\" # Name of the resource group which contains this account\n",
"\n",
"try:\n",
" # check if already attached\n",
" batch_compute = BatchCompute(ws, batch_compute_name)\n",
"except ComputeTargetException:\n",
" print('Attaching Batch compute...')\n",
" provisioning_config = BatchCompute.attach_configuration(resource_group=batch_resource_group, \n",
" account_name=batch_account_name)\n",
" batch_compute = ComputeTarget.attach(ws, batch_compute_name, provisioning_config)\n",
" batch_compute.wait_for_completion()\n",
" print(\"Provisioning state:{}\".format(batch_compute.provisioning_state))\n",
" print(\"Provisioning errors:{}\".format(batch_compute.provisioning_errors))\n",
"\n",
"print(\"Using Batch compute:{}\".format(batch_compute.cluster_resource_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set up the Blob storage associated with the workspace. \n",
"The following call retrieves the Azure Blob datastore associated with your workspace. \n",
"Note that `workspaceblobstore` is **the fixed name of this store; it cannot be changed and must be used as is**. \n",
" \n",
"If you want to register another Datastore, please follow the instructions from here:\n",
"https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data#register-a-datastore"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"datastore = Datastore(ws, \"workspaceblobstore\")\n",
"\n",
"print('Datastore details:')\n",
"print('Datastore Account Name: ' + datastore.account_name)\n",
"print('Datastore Workspace Name: ' + datastore.workspace.name)\n",
"print('Datastore Container Name: ' + datastore.container_name)"
]
},
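The linked instructions describe registering additional datastores. As a minimal sketch (the datastore name, container, account name, and key below are placeholders for illustration, not values from this workspace):

```python
from azureml.core import Datastore

# All values below are hypothetical placeholders -- substitute your own
another_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name='my_blob_datastore',   # name to register the store under
    container_name='my-container',        # existing blob container
    account_name='mystorageaccount',
    account_key='<storage_account_key>')  # or pass sas_token=... instead
```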
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Input and Output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this example we will upload a file to the provided Datastore. These are some helper methods to achieve that."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def create_local_file(content, file_name):\n",
" # create a file in a local temporary directory\n",
" temp_dir = mkdtemp()\n",
" with open(path.join(temp_dir, file_name), 'w') as f:\n",
" f.write(content)\n",
" return temp_dir\n",
"\n",
"\n",
"def upload_file_to_datastore(datastore, file_name, content):\n",
" dir = create_local_file(content=content, file_name=file_name)\n",
" datastore.upload(src_dir=dir, overwrite=True, show_progress=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we associate the input DataReference with an existing file in the provided Datastore. Feel free to upload the file of your choice manually or use the *upload_file_to_datastore* method. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"file_name=\"input.txt\"\n",
"\n",
"upload_file_to_datastore(datastore=datastore, \n",
" file_name=file_name, \n",
" content=\"this is the content of the file\")\n",
"\n",
"testdata = DataReference(datastore=datastore, \n",
" path_on_datastore=file_name, \n",
" data_reference_name=\"input\")\n",
"\n",
"outputdata = PipelineData(name=\"output\", datastore=datastore)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup AzureBatch Job Binaries"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"AzureBatch can run a task within the job; here we use a simple .cmd file as the executable. Feel free to put any binaries in the folder or modify the .cmd file as needed; they will be uploaded when we create the AzureBatch Step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"binaries_folder = \"azurebatch/job_binaries\"\n",
"if not os.path.isdir(binaries_folder):\n",
" os.mkdir(binaries_folder)\n",
"\n",
"file_name=\"azurebatch.cmd\"\n",
"with open(path.join(binaries_folder, file_name), 'w') as f:\n",
" f.write(\"copy \\\"%1\\\" \\\"%2\\\"\")"
]
},
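The `.cmd` script simply copies `%1` to `%2`; the two placeholders are filled from the step's `arguments` at run time (input path, then output path). On a Linux pool the equivalent would be a one-line `cp "$1" "$2"` script (an assumption for illustration; this notebook targets a Windows pool). The copy semantics can be checked locally:

```shell
# Simulate the job task: copy the first argument to the second,
# mirroring what azurebatch.cmd does with %1 and %2 on Windows.
echo "this is the content of the file" > input.txt
sh -c 'cp "$1" "$2"' _ input.txt output.txt
cat output.txt
```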
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an AzureBatchStep"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"AzureBatchStep is used to submit a job to the attached Azure Batch compute.\n",
"- **name:** Name of the step\n",
"- **pool_id:** Name of the pool; it can be an existing pool, or one that will be created when the job is submitted\n",
"- **inputs:** List of inputs that will be processed by the job\n",
"- **outputs:** List of outputs the job will create\n",
"- **executable:** The executable that will run as part of the job\n",
"- **arguments:** Arguments for the executable. They can be plain strings, inputs, outputs or parameters\n",
"- **compute_target:** The compute target where the job will run\n",
"- **source_directory:** The local folder that contains the binaries, executable, assemblies etc. to be executed by the job\n",
"\n",
"Optional parameters:\n",
"\n",
"- **create_pool:** Boolean flag to indicate whether to create the pool before running the job\n",
"- **delete_batch_job_after_finish:** Boolean flag to indicate whether to delete the job from the Batch account after it's finished\n",
"- **delete_batch_pool_after_finish:** Boolean flag to indicate whether to delete the pool after the job finishes\n",
"- **is_positive_exit_code_failure:** Boolean flag to indicate whether the job fails if the task exits with a positive code\n",
"- **vm_image_urn:** If create_pool is true and the VM uses VirtualMachineConfiguration. \n",
"    Value format: 'urn:publisher:offer:sku'. \n",
"    Example: urn:MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter \n",
"    For more details: \n",
"    https://docs.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage#table-of-commonly-used-windows-images and \n",
"    https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cli-ps-findimage#find-specific-images\n",
"- **run_task_as_admin:** Boolean flag to indicate whether the task should run with admin privileges\n",
"- **target_compute_nodes:** Number of compute nodes to add to the pool; assumes create_pool is true\n",
"- **vm_size:** Virtual machine size of the compute nodes, if create_pool is true\n",
"- **allow_reuse:** Whether the step should reuse previous results when re-run with the same settings/inputs\n",
"- **version:** A version tag to denote a change in functionality for the step"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"step = AzureBatchStep(\n",
" name=\"Azure Batch Job\",\n",
" pool_id=\"MyPoolName\", # Replace this with the pool name of your choice\n",
" inputs=[testdata],\n",
" outputs=[outputdata],\n",
" executable=\"azurebatch.cmd\",\n",
" arguments=[testdata, outputdata],\n",
" compute_target=batch_compute,\n",
" source_directory=binaries_folder,\n",
")"
]
},
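For contrast, a sketch using several of the optional parameters listed above to create (and later delete) a pool as part of the step; the pool name, VM size, node count, and image URN are illustrative values, not settings taken from this notebook:

```python
step_with_new_pool = AzureBatchStep(
    name="Azure Batch Job (new pool)",
    pool_id="MyNewPool",                   # hypothetical pool name
    create_pool=True,                      # create the pool before running
    vm_size="STANDARD_D2_V2",              # size of each compute node
    target_compute_nodes=2,                # nodes added to the new pool
    vm_image_urn="urn:MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter",
    delete_batch_pool_after_finish=True,   # clean up the pool afterwards
    inputs=[testdata],
    outputs=[outputdata],
    executable="azurebatch.cmd",
    arguments=[testdata, outputdata],
    compute_target=batch_compute,
    source_directory=binaries_folder)
```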
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build and Submit the Pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline = Pipeline(workspace=ws, steps=[step])\n",
"pipeline_run = Experiment(ws, 'azurebatch_experiment').submit(pipeline)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualize the Running Pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,397 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure Machine Learning Pipeline with HyperDriveStep\n",
"\n",
"\n",
"This notebook is used to demonstrate the use of HyperDriveStep in AML Pipeline.\n",
"\n",
"## Azure Machine Learning and Pipeline SDK-specific imports\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import shutil\n",
"import urllib\n",
"import azureml.core\n",
"from azureml.core import Workspace, Experiment\n",
"from azureml.core.datastore import Datastore\n",
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.exceptions import ComputeTargetException\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.steps import HyperDriveStep\n",
"from azureml.pipeline.core import Pipeline\n",
"from azureml.train.dnn import TensorFlow\n",
"from azureml.train.hyperdrive import *\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"\n",
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Azure ML experiment\n",
"Let's create an experiment named \"tf-mnist\" and a folder to hold the training scripts. The script runs will be recorded under the experiment in Azure.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"script_folder = './tf-mnist'\n",
"os.makedirs(script_folder, exist_ok=True)\n",
"\n",
"exp = Experiment(workspace=ws, name='tf-mnist')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download MNIST dataset\n",
"In order to train on the MNIST dataset, we will first download it directly from Yann LeCun's web site and save the files in a `data` folder locally."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"os.makedirs('./data/mnist', exist_ok=True)\n",
"\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename = './data/mnist/train-images.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename = './data/mnist/train-labels.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename = './data/mnist/test-images.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename = './data/mnist/test-labels.gz')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upload MNIST dataset to blob datastore \n",
"A [datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data) is a place where data can be stored and then made accessible to a Run, either by mounting or by copying the data to the compute target. In the next step, we will use Azure Blob Storage and upload the training and test sets into the Azure Blob datastore, which we will later mount on a Batch AI cluster for training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = ws.get_default_datastore()\n",
"ds.upload(src_dir='./data/mnist', target_path='mnist', overwrite=True, show_progress=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retrieve or create an Azure Machine Learning compute\n",
"Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Azure Machine Learning Compute in the current workspace, if it doesn't already exist. We will then run the training script on this compute target.\n",
"\n",
"If a compute with the given name cannot be found, a new one is created in the cell below. This process is broken down into the following steps:\n",
"\n",
"1. Create the configuration\n",
"2. Create the Azure Machine Learning compute\n",
"\n",
"**This process takes a few minutes and provides only sparse output along the way. Please make sure to wait until the call returns before moving to the next cell.**\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cluster_name = \"gpucluster\"\n",
"\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target {}.'.format(cluster_name))\n",
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_NC6\",\n",
" max_nodes=4)\n",
"\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
" compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n",
"\n",
"print(\"Azure Machine Learning Compute attached\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Copy the training files into the script folder\n",
"The TensorFlow training script is already created for you. You can simply copy it into the script folder, together with the utility library used to load the compressed data files into numpy arrays."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the training logic is in the tf_mnist.py file.\n",
"shutil.copy('./tf_mnist.py', script_folder)\n",
"\n",
"# the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays.\n",
"shutil.copy('./utils.py', script_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create TensorFlow estimator\n",
"Next, we construct an `azureml.train.dnn.TensorFlow` estimator object, use the Batch AI cluster as compute target, and pass the mount-point of the datastore to the training code as a parameter.\n",
"The TensorFlow estimator provides a simple way of launching a TensorFlow training job on a compute target. It automatically provides a Docker image that has TensorFlow installed -- if additional pip or conda packages are required, their names can be passed in via the `pip_packages` and `conda_packages` arguments and they will be included in the resulting Docker image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"est = TensorFlow(source_directory=script_folder, \n",
" compute_target=compute_target,\n",
" entry_script='tf_mnist.py', \n",
" use_gpu=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Intelligent hyperparameter tuning\n",
"We have trained the model with one set of hyperparameters; now let's see how we can tune them by launching multiple runs on the cluster. First, let's define the parameter space using random sampling.\n",
"\n",
"In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, the best validation accuracy (`validation_acc`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ps = RandomParameterSampling(\n",
" {\n",
" '--batch-size': choice(25, 50, 100),\n",
" '--first-layer-neurons': choice(10, 50, 200, 300, 500),\n",
" '--second-layer-neurons': choice(10, 50, 200, 500),\n",
" '--learning-rate': loguniform(-6, -1)\n",
" }\n",
")"
]
},
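Note that `loguniform(-6, -1)` draws the *exponent* uniformly, so the learning rate is sampled log-uniformly between e^-6 (about 0.0025) and e^-1 (about 0.37). A standard-library sketch of the equivalent sampling (an illustration of the documented behavior, not the SDK's own code):

```python
import math
import random

# loguniform(a, b) is equivalent to exp(uniform(a, b))
samples = [math.exp(random.uniform(-6, -1)) for _ in range(1000)]

# Every sample falls in [e^-6, e^-1], i.e. roughly [0.0025, 0.368]
print(min(samples) >= math.exp(-6), max(samples) <= math.exp(-1))  # -> True True
```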
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we will define an early termination policy. The `BanditPolicy` checks the job every 2 iterations. If the primary metric (defined later) falls outside of the top 10% range, Azure ML terminates the job. This saves us from continuing to explore hyperparameters that don't show promise of helping reach our target metric.\n",
"\n",
"Refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-tune-hyperparameters#specify-an-early-termination-policy) for more information on the BanditPolicy and other policies available."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"early_termination_policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)"
]
},
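Concretely, for a maximized metric a run survives an evaluation only if it is within the slack factor of the best run so far, i.e. `metric >= best / (1 + slack_factor)`. A small worked check of that rule (the formula restates the documented slack-factor semantics for illustration; it is not taken from the SDK):

```python
def survives_bandit(run_metric, best_metric, slack_factor=0.1):
    # For a maximized primary metric: keep the run only if it is
    # within the slack factor of the best metric reported so far.
    return run_metric >= best_metric / (1 + slack_factor)

print(survives_bandit(0.92, 1.0))  # -> True  (0.92 >= 1.0 / 1.1 ~ 0.909)
print(survives_bandit(0.90, 1.0))  # -> False (0.90 <  0.909)
```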
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we are ready to configure a run configuration object and specify the primary metric `validation_acc` that's recorded in your training runs. If you go back to the training script, you will notice that this value is logged after every epoch (a full pass over the training set). We also tell the service that we are looking to maximize this value, and we cap the total number of runs and the number of concurrent runs (both set to 1 below to keep this demonstration quick; in practice, concurrency up to the number of nodes in the compute cluster works well)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hd_config = HyperDriveRunConfig(estimator=est, \n",
" hyperparameter_sampling=ps,\n",
" policy=early_termination_policy,\n",
" primary_metric_name='validation_acc', \n",
" primary_metric_goal=PrimaryMetricGoal.MAXIMIZE, \n",
" max_total_runs=1,\n",
" max_concurrent_runs=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add HyperDrive as a step of pipeline\n",
"\n",
"Let's set up a data reference for the inputs of the HyperDrive step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_folder = DataReference(\n",
" datastore=ds,\n",
" data_reference_name=\"mnist_data\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### HyperDriveStep\n",
"HyperDriveStep can be used to run a HyperDrive job as a step in a pipeline.\n",
"- **name:** Name of the step\n",
"- **hyperdrive_run_config:** A HyperDriveRunConfig that defines the configuration for this HyperDrive run\n",
"- **estimator_entry_script_arguments:** List of command-line arguments for estimator entry script\n",
"- **inputs:** List of input port bindings\n",
"- **outputs:** List of output port bindings\n",
"- **metrics_output:** Optional value specifying the location to store HyperDrive run metrics as a JSON file\n",
"- **allow_reuse:** Whether the step should reuse previous results when re-run with the same settings/inputs\n",
"- **version:** A version tag to denote a change in functionality for the step\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hd_step = HyperDriveStep(\n",
" name=\"hyperdrive_module\",\n",
" hyperdrive_run_config=hd_config,\n",
" estimator_entry_script_arguments=['--data-folder', data_folder],\n",
" inputs=[data_folder])"
]
},
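A variant sketch showing the optional `metrics_output` binding, which writes the HyperDrive run metrics to a JSON file that downstream steps can consume (the `metrics_data` name is illustrative):

```python
metrics_data = PipelineData(name="metrics_data", datastore=ds)

hd_step_with_metrics = HyperDriveStep(
    name="hyperdrive_module",
    hyperdrive_run_config=hd_config,
    estimator_entry_script_arguments=['--data-folder', data_folder],
    inputs=[data_folder],
    metrics_output=metrics_data)  # run metrics land here as a JSON file
```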
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline = Pipeline(workspace=ws, steps=[hd_step])\n",
"pipeline_run = Experiment(ws, 'Hyperdrive_Test').submit(pipeline)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor using widget"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Wait for the completion of this Pipeline run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_run.wait_for_completion()"
]
}
],
"metadata": {
"authors": [
{
"name": "sonnyp"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,368 +1,365 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to Publish a Pipeline and Invoke the REST endpoint\n",
"In this notebook, we will see how we can publish a pipeline and then invoke the REST endpoint."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites and Azure Machine Learning Basics\n",
"Make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. \n",
"\n",
"### Initialization Steps"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace, Datastore\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)\n",
"\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import PythonScriptStep\n",
"from azureml.pipeline.core.graph import PipelineParameter\n",
"\n",
"print(\"Pipeline SDK-specific imports completed\")\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')\n",
"\n",
"# Default datastore (Azure file storage)\n",
"def_file_store = ws.get_default_datastore() \n",
"print(\"Default datastore's name: {}\".format(def_file_store.name))\n",
"\n",
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
"print(\"Blobstore's name: {}\".format(def_blob_store.name))\n",
"\n",
"# project folder\n",
"project_folder = '.'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compute Targets\n",
"#### Retrieve an already attached Azure Machine Learning Compute"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"aml_compute_target = \"aml-compute\"\n",
"try:\n",
" aml_compute = AmlCompute(ws, aml_compute_target)\n",
" print(\"found existing compute target.\")\n",
"except ComputeTargetException:\n",
" print(\"creating new compute target\")\n",
" \n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\",\n",
" min_nodes = 1, \n",
" max_nodes = 4) \n",
" aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)\n",
" aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
"# example: un-comment the following line.\n",
"# print(aml_compute.get_status().serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building Pipeline Steps with Inputs and Outputs\n",
"As mentioned earlier, a step in the pipeline can take data as input. This data can be a data source that lives in one of the accessible data locations, or intermediate data produced by a previous step in the pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reference the data uploaded to blob storage using DataReference\n",
"# Assign the datasource to blob_input_data variable\n",
"blob_input_data = DataReference(\n",
" datastore=def_blob_store,\n",
" data_reference_name=\"test_data\",\n",
" path_on_datastore=\"20newsgroups/20news.pkl\")\n",
"print(\"DataReference object created\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define intermediate data using PipelineData\n",
"processed_data1 = PipelineData(\"processed_data1\",datastore=def_blob_store)\n",
"print(\"PipelineData object created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes a datasource and produces intermediate data.\n",
"In this step, we define a step that consumes a datasource and produces intermediate data.\n",
"\n",
"**Open `train.py` on your local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# trainStep consumes the datasource (DataReference) defined above\n",
"# and produces processed_data1\n",
"trainStep = PythonScriptStep(\n",
" script_name=\"train.py\", \n",
" arguments=[\"--input_data\", blob_input_data, \"--output_train\", processed_data1],\n",
" inputs=[blob_input_data],\n",
" outputs=[processed_data1],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder\n",
")\n",
"print(\"trainStep created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes intermediate data and produces intermediate data\n",
"In this step, we define a step that consumes intermediate data and produces intermediate data.\n",
"\n",
"**Open `extract.py` on your local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extractStep consumes the intermediate data produced by trainStep\n",
"# This step also produces an output processed_data2\n",
"processed_data2 = PipelineData(\"processed_data2\", datastore=def_blob_store)\n",
"\n",
"extractStep = PythonScriptStep(\n",
" script_name=\"extract.py\",\n",
" arguments=[\"--input_extract\", processed_data1, \"--output_extract\", processed_data2],\n",
" inputs=[processed_data1],\n",
" outputs=[processed_data2],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
"print(\"extractStep created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes multiple intermediate data and produces intermediate data\n",
"In this step, we define a step that consumes multiple intermediate data and produces intermediate data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### PipelineParameter"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This step also has a [PipelineParameter](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.pipelineparameter?view=azure-ml-py) argument that helps with calling the REST endpoint of the published pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We will use this later in publishing pipeline\n",
"pipeline_param = PipelineParameter(name=\"pipeline_arg\", default_value=10)\n",
"source": [ "print(\"pipeline parameter created\")"
"# We will use this later in publishing pipeline\n", ]
"pipeline_param = PipelineParameter(name=\"pipeline_arg\", default_value=10)\n", },
"print(\"pipeline parameter created\")" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "**Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**"
"metadata": {}, ]
"source": [ },
"**Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "# Now define step6 that takes two inputs (both intermediate data), and produce an output\n",
"outputs": [], "processed_data3 = PipelineData(\"processed_data3\", datastore=def_blob_store)\n",
"source": [ "\n",
"# Now define step6 that takes two inputs (both intermediate data), and produce an output\n", "\n",
"processed_data3 = PipelineData(\"processed_data3\", datastore=def_blob_store)\n", "\n",
"\n", "compareStep = PythonScriptStep(\n",
"\n", " script_name=\"compare.py\",\n",
"\n", " arguments=[\"--compare_data1\", processed_data1, \"--compare_data2\", processed_data2, \"--output_compare\", processed_data3, \"--pipeline_param\", pipeline_param],\n",
"compareStep = PythonScriptStep(\n", " inputs=[processed_data1, processed_data2],\n",
" script_name=\"compare.py\",\n", " outputs=[processed_data3], \n",
" arguments=[\"--compare_data1\", processed_data1, \"--compare_data2\", processed_data2, \"--output_compare\", processed_data3, \"--pipeline_param\", pipeline_param],\n", " compute_target=aml_compute, \n",
" inputs=[processed_data1, processed_data2],\n", " source_directory=project_folder)\n",
" outputs=[processed_data3], \n", "print(\"compareStep created\")"
" compute_target=aml_compute, \n", ]
" source_directory=project_folder)\n", },
"print(\"compareStep created\")" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "#### Build the pipeline"
"metadata": {}, ]
"source": [ },
"#### Build the pipeline" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "pipeline1 = Pipeline(workspace=ws, steps=[compareStep])\n",
"outputs": [], "print (\"Pipeline is built\")\n",
"source": [ "\n",
"pipeline1 = Pipeline(workspace=ws, steps=[compareStep])\n", "pipeline1.validate()\n",
"print (\"Pipeline is built\")\n", "print(\"Simple validation complete\") "
"\n", ]
"pipeline1.validate()\n", },
"print(\"Simple validation complete\") " {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "## Publish the pipeline"
"metadata": {}, ]
"source": [ },
"## Publish the pipeline" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "published_pipeline1 = pipeline1.publish(name=\"My_New_Pipeline\", description=\"My Published Pipeline Description\")\n",
"outputs": [], "print(published_pipeline1.id)"
"source": [ ]
"published_pipeline1 = pipeline1.publish(name=\"My_New_Pipeline\", description=\"My Published Pipeline Description\")\n", },
"print(published_pipeline1.id)" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "### Run published pipeline using its REST endpoint"
"metadata": {}, ]
"source": [ },
"### Run published pipeline using its REST endpoint" {
] "cell_type": "code",
}, "execution_count": null,
{ "metadata": {},
"cell_type": "code", "outputs": [],
"execution_count": null, "source": [
"metadata": {}, "from azureml.core.authentication import InteractiveLoginAuthentication\n",
"outputs": [], "import requests\n",
"source": [ "\n",
"from azureml.core.authentication import AzureCliAuthentication\n", "auth = InteractiveLoginAuthentication()\n",
"import requests\n", "aad_token = auth.get_authentication_header()\n",
"\n", "\n",
"cli_auth = AzureCliAuthentication()\n", "rest_endpoint1 = published_pipeline1.endpoint\n",
"aad_token = cli_auth.get_authentication_header()\n", "\n",
"\n", "print(rest_endpoint1)\n",
"rest_endpoint1 = published_pipeline1.endpoint\n", "\n",
"\n", "# specify the param when running the pipeline\n",
"print(rest_endpoint1)\n", "response = requests.post(rest_endpoint1, \n",
"\n", " headers=aad_token, \n",
"# specify the param when running the pipeline\n", " json={\"ExperimentName\": \"My_Pipeline1\",\n",
"response = requests.post(rest_endpoint1, \n", " \"RunSource\": \"SDK\",\n",
" headers=aad_token, \n", " \"ParameterAssignments\": {\"pipeline_arg\": 45}})\n",
" json={\"ExperimentName\": \"My_Pipeline1\",\n", "run_id = response.json()[\"Id\"]\n",
" \"RunSource\": \"SDK\",\n", "\n",
" \"ParameterAssignments\": {\"pipeline_arg\": 45}})\n", "print(run_id)"
"run_id = response.json()[\"Id\"]\n", ]
"\n", },
"print(run_id)" {
] "cell_type": "markdown",
}, "metadata": {},
{ "source": [
"cell_type": "markdown", "# Next: Data Transfer\n",
"metadata": {}, "The next [notebook](./aml-pipelines-data-transfer.ipynb) will showcase data transfer steps between different types of data stores."
"source": [ ]
"# Next: Data Transfer\n", }
"The next [notebook](./aml-pipelines-data-transfer.ipynb) will showcase data transfer steps between different types of data stores."
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3", "authors": [
"language": "python", {
"name": "python3" "name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
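The REST trigger above posts a JSON body with `ExperimentName`, `RunSource`, and `ParameterAssignments`, where the parameter assignments override the `PipelineParameter` defaults set at publish time. A minimal sketch of assembling and sanity-checking that payload (the `build_trigger_payload` helper is our illustration, not part of the Azure ML SDK):

```python
# Sketch: build the JSON body for triggering a published pipeline run.
# build_trigger_payload is a hypothetical helper, not an SDK function.

def build_trigger_payload(experiment_name, parameters=None, run_source="SDK"):
    """Return the request body expected by a published pipeline endpoint."""
    if not experiment_name:
        raise ValueError("experiment_name is required")
    payload = {"ExperimentName": experiment_name, "RunSource": run_source}
    if parameters:
        # These values override the PipelineParameter defaults set at publish time.
        payload["ParameterAssignments"] = dict(parameters)
    return payload


body = build_trigger_payload("My_Pipeline1", {"pipeline_arg": 45})
# requests.post(rest_endpoint1, headers=aad_token, json=body) would then
# submit the run; the response JSON carries the new run's "Id".
```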


@@ -0,0 +1,450 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to Setup a Schedule for a Published Pipeline\n",
"In this notebook, we will show you how you can run an already published pipeline on a schedule."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites and AML Basics\n",
"Make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc.\n",
"\n",
"### Initialization Steps"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compute Targets\n",
"#### Retrieve an already attached Azure Machine Learning Compute"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Run, Experiment, Datastore\n",
"\n",
"from azureml.widgets import RunDetails\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
"aml_compute_target = \"aml-compute\"\n",
"try:\n",
" aml_compute = AmlCompute(ws, aml_compute_target)\n",
" print(\"Found existing compute target: {}\".format(aml_compute_target))\n",
"except:\n",
" print(\"Creating new compute target: {}\".format(aml_compute_target))\n",
" \n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\",\n",
" min_nodes = 1, \n",
" max_nodes = 4) \n",
" aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)\n",
" aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build and Publish Pipeline\n",
"Build a simple pipeline, publish it and add a schedule to run it."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define a pipeline step\n",
"Define a single step pipeline for demonstration purpose."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.steps import PythonScriptStep\n",
"\n",
"\n",
"# project folder\n",
"project_folder = 'scripts'\n",
"\n",
"trainStep = PythonScriptStep(\n",
" name=\"Training_Step\",\n",
" script_name=\"train.py\", \n",
" compute_target=aml_compute_target, \n",
" source_directory=project_folder\n",
")\n",
"print(\"TrainStep created\")"
]
},
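The step above assumes a `train.py` inside the `scripts` folder, which the notebook does not show. A minimal placeholder along these lines (our sketch, not a file shipped with the notebook) is enough to exercise the schedule demo:

```python
# Minimal stand-in for scripts/train.py -- illustration only.
# A real training script would load data and fit a model here.
import os


def train():
    # When run remotely, Azure ML injects run information via environment
    # variables (AZUREML_RUN_ID is an assumption here); fall back to a local
    # label so the script also runs standalone.
    run_context = os.environ.get("AZUREML_RUN_ID", "local")
    print("Starting training step (run: {})".format(run_context))
    return "trained"


if __name__ == "__main__":
    train()
```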
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline\n",
"\n",
"pipeline1 = Pipeline(workspace=ws, steps=[trainStep])\n",
"print (\"Pipeline is built\")\n",
"\n",
"pipeline1.validate()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Publish the pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"\n",
"timenow = datetime.now().strftime('%m-%d-%Y-%H-%M')\n",
"\n",
"pipeline_name = timenow + \"-Pipeline\"\n",
"print(pipeline_name)\n",
"\n",
"published_pipeline1 = pipeline1.publish(\n",
" name=pipeline_name, \n",
" description=pipeline_name)\n",
"print(\"Newly published pipeline id: {}\".format(published_pipeline1.id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Schedule Operations\n",
"Schedule operations require id of a published pipeline. You can get all published pipelines and do Schedule operations on them, or if you already know the id of the published pipeline, you can use it directly as well.\n",
"### Get published pipeline ID"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import PublishedPipeline\n",
"\n",
"# You could retrieve all pipelines that are published, or \n",
"# just get the published pipeline object that you have the ID for.\n",
"\n",
"# Get all published pipeline objects in the workspace\n",
"all_pub_pipelines = PublishedPipeline.get_all(ws)\n",
"\n",
"# We will iterate through the list of published pipelines and \n",
"# use the last ID in the list for Schelue operations: \n",
"print(\"Published pipelines found in the workspace:\")\n",
"for pub_pipeline in all_pub_pipelines:\n",
" print(pub_pipeline.id)\n",
" pub_pipeline_id = pub_pipeline.id\n",
"\n",
"print(\"Published pipeline id to be used for Schedule operations: {}\".format(pub_pipeline_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a schedule for the pipeline using a recurrence\n",
"This schedule will run on a specified recurrence interval."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule\n",
"\n",
"recurrence = ScheduleRecurrence(frequency=\"Day\", interval=2, hours=[22], minutes=[30]) # Runs every other day at 10:30pm\n",
"\n",
"schedule = Schedule.create(workspace=ws, name=\"My_Schedule\",\n",
" pipeline_id=pub_pipeline_id, \n",
" experiment_name='Schedule_Run',\n",
" recurrence=recurrence,\n",
" wait_for_provisioning=True,\n",
" description=\"Schedule Run\")\n",
"\n",
"# You may want to make sure that the schedule is provisioned properly\n",
"# before making any further changes to the schedule\n",
"\n",
"print(\"Created schedule with id: {}\".format(schedule.id))"
]
},
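`ScheduleRecurrence(frequency="Day", interval=2, hours=[22], minutes=[30])` fires every other day at 10:30 PM. The semantics can be sketched in plain Python (for intuition only; the `next_runs` helper is ours, and the actual scheduling happens service-side in Azure ML):

```python
# Sketch of the recurrence semantics: every `interval_days` days at the
# given hour/minute, starting from the first fire time after `start`.
from datetime import datetime, timedelta


def next_runs(start, interval_days, hour, minute, count):
    """Yield the next `count` fire times on an every-N-days schedule."""
    fire = start.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if fire <= start:
        # Today's slot already passed; begin at the next interval.
        fire += timedelta(days=interval_days)
    for _ in range(count):
        yield fire
        fire += timedelta(days=interval_days)


runs = list(next_runs(datetime(2019, 2, 25, 9, 0), 2, 22, 30, 3))
```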
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note: Set the `wait_for_provisioning` flag to False if you do not want to wait for the call to provision the schedule in the backend."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get all schedules for a given pipeline\n",
"Once you have the published pipeline ID, then you can get all schedules for that pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"schedules = Schedule.get_all(ws, pipeline_id=pub_pipeline_id)\n",
"\n",
"# We will iterate through the list of schedules and \n",
"# use the last ID in the list for further operations: \n",
"print(\"Found these schedules for the pipeline id {}:\".format(pub_pipeline_id))\n",
"for schedule in schedules: \n",
" print(schedule.id)\n",
" schedule_id = schedule.id\n",
"\n",
"print(\"Schedule id to be used for schedule operations: {}\".format(schedule_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get all schedules in your workspace\n",
"You can also iterate through all schedules in your workspace if needed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use active_only=False to get all schedules including disabled schedules\n",
"schedules = Schedule.get_all(ws, active_only=True) \n",
"print(\"Your workspace has the following schedules set up:\")\n",
"for schedule in schedules:\n",
" print(\"{} (Published pipeline: {}\".format(schedule.id, schedule.pipeline_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get the schedule"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fetched_schedule = Schedule.get(ws, schedule_id)\n",
"print(\"Using schedule with id: {}\".format(fetched_schedule.id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Disable the schedule"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
"# for the call to provision the schedule in the backend.\n",
"fetched_schedule.disable(wait_for_provisioning=True)\n",
"fetched_schedule = Schedule.get(ws, schedule_id)\n",
"print(\"Disabled schedule {}. New status is: {}\".format(fetched_schedule.id, fetched_schedule.status))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Reactivate the schedule"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
"# for the call to provision the schedule in the backend.\n",
"fetched_schedule.activate(wait_for_provisioning=True)\n",
"fetched_schedule = Schedule.get(ws, schedule_id)\n",
"print(\"Activated schedule {}. New status is: {}\".format(fetched_schedule.id, fetched_schedule.status))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Change recurrence of the schedule"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
"# for the call to provision the schedule in the backend.\n",
"recurrence = ScheduleRecurrence(frequency=\"Hour\", interval=2) # Runs every two hours\n",
"\n",
"fetched_schedule = Schedule.get(ws, schedule_id)\n",
"\n",
"fetched_schedule.update(name=\"My_Updated_Schedule\", \n",
" description=\"Updated_Schedule_Run\", \n",
" status='Active', \n",
" wait_for_provisioning=True,\n",
" recurrence=recurrence)\n",
"\n",
"fetched_schedule = Schedule.get(ws, fetched_schedule.id)\n",
"\n",
"print(\"Updated schedule:\", fetched_schedule.id, \n",
" \"\\nNew name:\", fetched_schedule.name,\n",
" \"\\nNew frequency:\", fetched_schedule.recurrence.frequency,\n",
" \"\\nNew status:\", fetched_schedule.status)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a schedule for the pipeline using a Datastore\n",
"This schedule will run when additions or modifications are made to Blobs in the Datastore container.\n",
"Note: Only Blob Datastores are supported."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.datastore import Datastore\n",
"\n",
"datastore = Datastore(workspace=ws, name=\"workspaceblobstore\")\n",
"\n",
"schedule = Schedule.create(workspace=ws, name=\"My_Schedule\",\n",
" pipeline_id=pub_pipeline_id, \n",
" experiment_name='Schedule_Run',\n",
" datastore=datastore,\n",
" wait_for_provisioning=True,\n",
" description=\"Schedule Run\")\n",
"\n",
"# You may want to make sure that the schedule is provisioned properly\n",
"# before making any further changes to the schedule\n",
"\n",
"print(\"Created schedule with id: {}\".format(schedule.id))"
]
},
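A datastore-triggered schedule fires when blobs in the container are added or modified. The underlying change-detection idea can be sketched as follows (pure Python, for intuition only; the `changed_blobs` helper is ours, and the real polling is done by the Azure ML service):

```python
# Sketch: detect added or modified blobs between two listings.
# Each listing maps blob name -> last-modified marker (any comparable value).

def changed_blobs(previous, current):
    """Return names of blobs that are new or modified since `previous`."""
    return sorted(
        name for name, modified in current.items()
        if name not in previous or previous[name] != modified
    )


before = {"data/a.csv": 1, "data/b.csv": 1}
after = {"data/a.csv": 1, "data/b.csv": 2, "data/c.csv": 1}
# b.csv was modified and c.csv was added, so both would trigger a run.
```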
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
"# for the call to provision the schedule in the backend.\n",
"schedule.disable(wait_for_provisioning=True)\n",
"schedule = Schedule.get(ws, schedule_id)\n",
"print(\"Disabled schedule {}. New status is: {}\".format(schedule.id, schedule.status))"
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,368 +1,367 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AML Pipeline with AdlaStep\n",
-"This notebook is used to demonstrate the use of AdlaStep in AML Pipeline."
+"\n",
+"This notebook is used to demonstrate the use of AdlaStep in AML Pipelines. [AdlaStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adla_step.adlastep?view=azure-ml-py) is used to run U-SQL scripts using Azure Data Lake Analytics service. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AML and Pipeline SDK-specific imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
-"import azureml.core\n",
-"from azureml.core.compute import ComputeTarget, DatabricksCompute\n",
-"from azureml.exceptions import ComputeTargetException\n",
-"from azureml.core import Workspace, Run, Experiment\n",
-"from azureml.pipeline.core import Pipeline, PipelineData\n",
-"from azureml.pipeline.steps import AdlaStep\n",
-"from azureml.core.datastore import Datastore\n",
-"from azureml.data.data_reference import DataReference\n",
-"from azureml.core import attach_legacy_compute_target\n",
+"from msrest.exceptions import HttpOperationError\n",
+"\n",
+"import azureml.core\n",
+"from azureml.exceptions import ComputeTargetException\n",
+"from azureml.core import Workspace, Experiment\n",
+"from azureml.core.compute import ComputeTarget, AdlaCompute\n",
+"from azureml.core.datastore import Datastore\n",
+"from azureml.data.data_reference import DataReference\n",
+"from azureml.pipeline.core import Pipeline, PipelineData\n",
+"from azureml.pipeline.steps import AdlaStep\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"script_folder = '.'\n",
-"experiment_name = \"adla_101_experiment\"\n",
-"ws._initialize_folder(experiment_name=experiment_name, directory=script_folder)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Register Datastore"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"\n",
-"workspace = ws.name\n",
-"datastore_name='MyAdlsDatastore'\n",
-"subscription_id=os.getenv(\"ADL_SUBSCRIPTION_62\", \"<my-subscription-id>\") # subscription id of ADLS account\n",
-"resource_group=os.getenv(\"ADL_RESOURCE_GROUP_62\", \"<my-resource-group>\") # resource group of ADLS account\n",
-"store_name=os.getenv(\"ADL_STORENAME_62\", \"<my-datastore-name>\") # ADLS account name\n",
-"tenant_id=os.getenv(\"ADL_TENANT_62\", \"<my-tenant-id>\") # tenant id of service principal\n",
-"client_id=os.getenv(\"ADL_CLIENTID_62\", \"<my-client-id>\") # client id of service principal\n",
-"client_secret=os.getenv(\"ADL_CLIENT_62_SECRET\", \"<my-client-secret>\") # the secret of service principal\n",
-"\n",
-"try:\n",
-" adls_datastore = Datastore.get(ws, datastore_name)\n",
-" print(\"found datastore with name: %s\" % datastore_name)\n",
-"except:\n",
-" adls_datastore = Datastore.register_azure_data_lake(\n",
-" workspace=ws,\n",
-" datastore_name=datastore_name,\n",
-" subscription_id=subscription_id, # subscription id of ADLS account\n",
-" resource_group=resource_group, # resource group of ADLS account\n",
-" store_name=store_name, # ADLS account name\n",
-" tenant_id=tenant_id, # tenant id of service principal\n",
-" client_id=client_id, # client id of service principal\n",
-" client_secret=client_secret) # the secret of service principal\n",
-" print(\"registered datastore with name: %s\" % datastore_name)\n"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Create DataReferences and PipelineData\n",
-"\n",
-"In the code cell below, replace datastorename with your default datastore name. Copy the file `testdata.txt` (located in the pipeline folder that this notebook is in) to the path on the datastore."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"datastorename = \"MyAdlsDatastore\"\n",
-"\n",
-"adls_datastore = Datastore(workspace=ws, name=datastorename)\n",
-"script_input = DataReference(\n",
-" datastore=adls_datastore,\n",
-" data_reference_name=\"script_input\",\n",
-" path_on_datastore=\"testdata/testdata.txt\")\n",
-"\n",
-"script_output = PipelineData(\"script_output\", datastore=adls_datastore)\n",
-"\n",
-"print(\"Created Pipeline Data\")"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Setup Data Lake Account\n",
-"\n",
-"ADLA can only use data that is located in the default data store associated with that ADLA account. Through Azure portal, check the name of the default data store corresponding to the ADLA account you are using below. Replace the value associated with `adla_compute_name` in the code cell below accordingly."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"adla_compute_name = 'testadl' # Replace this with your default compute\n",
-"\n",
-"from azureml.core.compute import ComputeTarget, AdlaCompute\n",
-"\n",
-"def get_or_create_adla_compute(workspace, compute_name):\n",
-" try:\n",
-" return AdlaCompute(workspace, compute_name)\n",
-" except ComputeTargetException as e:\n",
-" if 'ComputeTargetNotFound' in e.message:\n",
-" print('adla compute not found, creating...')\n",
-" provisioning_config = AdlaCompute.provisioning_configuration()\n",
-" adla_compute = ComputeTarget.create(workspace, compute_name, provisioning_config)\n",
-" adla_compute.wait_for_completion()\n",
-" return adla_compute\n",
-" else:\n",
-" raise e\n",
-" \n",
-"adla_compute = get_or_create_adla_compute(ws, adla_compute_name)\n",
-"\n",
-"# CLI:\n",
-"# Create: az ml computetarget setup adla -n <name>\n",
-"# BYOC: az ml computetarget attach adla -n <name> -i <resource-id>"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Once the above code cell completes, run the below to check your ADLA compute status:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"print(\"ADLA compute state:{}\".format(adla_compute.provisioning_state))\n",
-"print(\"ADLA compute state:{}\".format(adla_compute.provisioning_errors))\n",
-"print(\"Using ADLA compute:{}\".format(adla_compute.cluster_resource_id))"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Attach ADLA account to workspace\n",
+"\n",
+"To submit jobs to Azure Data Lake Analytics service, you must first attach your ADLA account to the workspace. You'll need to provide the account name and resource group of ADLA account to complete this part."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"adla_compute_name = 'testadl' # Name to associate with new compute in workspace\n",
+"\n",
+"# ADLA account details needed to attach as compute to workspace\n",
+"adla_account_name = \"<adla_account_name>\" # Name of the Azure Data Lake Analytics account\n",
+"adla_resource_group = \"<adla_resource_group>\" # Name of the resource group which contains this account\n",
+"\n",
+"try:\n",
+" # check if already attached\n",
+" adla_compute = AdlaCompute(ws, adla_compute_name)\n",
+"except ComputeTargetException:\n",
+" print('attaching adla compute...')\n",
+" attach_config = AdlaCompute.attach_configuration(resource_group=adla_resource_group, account_name=adla_account_name)\n",
+" adla_compute = ComputeTarget.attach(ws, adla_compute_name, attach_config)\n",
+" adla_compute.wait_for_completion()\n",
+"\n",
+"print(\"Using ADLA compute:{}\".format(adla_compute.cluster_resource_id))\n",
+"print(\"Provisioning state:{}\".format(adla_compute.provisioning_state))\n",
+"print(\"Provisioning errors:{}\".format(adla_compute.provisioning_errors))"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Register Data Lake Storage as Datastore\n",
+"\n",
+"To register Data Lake Storage as Datastore in workspace, you'll need account information like account name, resource group and subscription Id. \n",
+"\n",
+"> AdlaStep can only work with data stored in the **default** Data Lake Storage of the Data Lake Analytics account provided above. If the data you need to work with is in a non-default storage, you can use a DataTransferStep to copy the data before training. You can find the default storage by opening your Data Lake Analytics account in Azure portal and then navigating to 'Data sources' item under Settings in the left pane.\n",
+"\n",
+"### Grant Azure AD application access to Data Lake Storage\n",
+"\n",
+"You'll also need to provide an Active Directory application which can access Data Lake Storage. [This document](https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory) contains step-by-step instructions on how to create an AAD application and assign to Data Lake Storage. Couple of important notes when assigning permissions to AAD app:\n",
+"\n",
+"- Access should be provided at root folder level.\n",
+"- In 'Assign permissions' pane, select Read, Write, and Execute permissions for 'This folder and all children'. Add as 'An access permission entry and a default permission entry' to make sure application can access any new files created in the future."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"datastore_name = 'MyAdlsDatastore' # Name to associate with data store in workspace\n",
+"\n",
+"# ADLS storage account details needed to register as a Datastore\n",
+"subscription_id = os.getenv(\"ADL_SUBSCRIPTION_62\", \"<my-subscription-id>\") # subscription id of ADLS account\n",
+"resource_group = os.getenv(\"ADL_RESOURCE_GROUP_62\", \"<my-resource-group>\") # resource group of ADLS account\n",
+"store_name = os.getenv(\"ADL_STORENAME_62\", \"<my-datastore-name>\") # ADLS account name\n",
+"tenant_id = os.getenv(\"ADL_TENANT_62\", \"<my-tenant-id>\") # tenant id of service principal\n",
+"client_id = os.getenv(\"ADL_CLIENTID_62\", \"<my-client-id>\") # client id of service principal\n",
+"client_secret = os.getenv(\"ADL_CLIENT_62_SECRET\", \"<my-client-secret>\") # the secret of service principal\n",
+"\n",
+"try:\n",
+" adls_datastore = Datastore.get(ws, datastore_name)\n",
+" print(\"found datastore with name: %s\" % datastore_name)\n",
+"except HttpOperationError:\n",
+" adls_datastore = Datastore.register_azure_data_lake(\n",
+" workspace=ws,\n",
+" datastore_name=datastore_name,\n",
+" subscription_id=subscription_id, # subscription id of ADLS account\n",
+" resource_group=resource_group, # resource group of ADLS account\n",
+" store_name=store_name, # ADLS account name\n",
+" tenant_id=tenant_id, # tenant id of service principal\n",
+" client_id=client_id, # client id of service principal\n",
+" client_secret=client_secret) # the secret of service principal\n",
+" print(\"registered datastore with name: %s\" % datastore_name)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Setup inputs and outputs\n",
+"\n",
+"For purpose of this demo, we're going to execute a simple U-SQL script that reads a CSV file and writes portion of content to a new text file. First, let's create our sample input which contains 3 columns: employee Id, name and department Id."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"# create a folder to store files for our job\n",
+"sample_folder = \"adla_sample\"\n",
+"\n",
+"if not os.path.isdir(sample_folder):\n",
+" os.mkdir(sample_folder)"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"%%writefile $sample_folder/sample_input.csv\n",
+"1, Noah, 100\n",
+"3, Liam, 100\n",
+"4, Emma, 100\n",
+"5, Jacob, 100\n",
+"7, Jennie, 100"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Upload this file to Data Lake Storage at location `adla_sample/sample_input.csv` and create a DataReference to refer to this file."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"sample_input = DataReference(\n",
+" datastore=adls_datastore,\n",
+" data_reference_name=\"employee_data\",\n",
"## Create an AdlaStep" " path_on_datastore=\"adla_sample/sample_input.csv\")"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"**AdlaStep** is used to run U-SQL script using Azure Data Lake Analytics.\n", "Create PipelineData object to store output produced by AdlaStep."
"\n", ]
"- **name:** Name of module\n", },
"- **script_name:** name of U-SQL script\n", {
"- **inputs:** List of input port bindings\n", "cell_type": "code",
"- **outputs:** List of output port bindings\n", "execution_count": null,
"- **adla_compute:** the ADLA compute to use for this job\n", "metadata": {},
"- **params:** Dictionary of name-value pairs to pass to U-SQL job *(optional)*\n", "outputs": [],
"- **degree_of_parallelism:** the degree of parallelism to use for this job *(optional)*\n", "source": [
"- **priority:** the priority value to use for the current job *(optional)*\n", "sample_output = PipelineData(\"sample_output\", datastore=adls_datastore)"
"- **runtime_version:** the runtime version of the Data Lake Analytics engine *(optional)*\n", ]
"- **root_folder:** folder that contains the script, assemblies etc. *(optional)*\n", },
"- **hash_paths:** list of paths to hash to detect a change (script file is always hashed) *(optional)*\n", {
"\n", "cell_type": "markdown",
"### Remarks\n", "metadata": {},
"\n", "source": [
"You can use `@@name@@` syntax in your script to refer to inputs, outputs, and params.\n", "## Write your U-SQL script\n",
"\n", "\n",
"* if `name` is the name of an input or output port binding, any occurences of `@@name@@` in the script\n", "Now let's write a U-Sql script that reads above CSV file and writes the name column to a new file.\n",
"are replaced with actual data path of corresponding port binding.\n", "\n",
"* if `name` matches any key in `params` dict, any occurences of `@@name@@` will be replaced with\n", "Instead of hard-coding paths in your script, you can use `@@name@@` syntax to refer to inputs, outputs, and parameters.\n",
"corresponding value in dict.\n", "\n",
"\n", "- If `name` is the name of an input or output port binding, any occurrences of `@@name@@` in the script are replaced with actual data path of corresponding port binding.\n",
"#### Sample script\n", "- If `name` matches any key in the `params` dictionary, any occurrences of `@@name@@` will be replaced with corresponding value in the dictionary.\n",
"\n", "\n",
"```\n", "Note the use of @@ syntax in the below script. Before submitting the job to Data Lake Analytics service, `@@emplyee_data@@` will be replaced with actual path of `sample_input.csv` in Data Lake Storage. Similarly, `@@sample_output@@` will be replaced with a path in Data Lake Storage which will be used to store intermediate output produced by the step."
"@resourcereader =\n", ]
" EXTRACT query string\n", },
" FROM \"@@script_input@@\"\n", {
" USING Extractors.Csv();\n", "cell_type": "code",
"\n", "execution_count": null,
"\n", "metadata": {},
"OUTPUT @resourcereader\n", "outputs": [],
"TO \"@@script_output@@\"\n", "source": [
"USING Outputters.Csv();\n", "%%writefile $sample_folder/sample_script.usql\n",
"```" "\n",
] "// Read employee information from csv file\n",
}, "@employees = \n",
{ " EXTRACT EmpId int, EmpName string, DeptId int\n",
"cell_type": "code", " FROM \"@@employee_data@@\"\n",
"execution_count": null, " USING Extractors.Csv();\n",
"metadata": {}, "\n",
"outputs": [], "// Export employee names to text file\n",
"source": [ "OUTPUT\n",
"adla_step = AdlaStep(\n", "(\n",
" name='adla_script_step',\n", " SELECT EmpName\n",
" script_name='test_adla_script.usql',\n", " FROM @employees\n",
" inputs=[script_input],\n", ")\n",
" outputs=[script_output],\n", "TO \"@@sample_output@@\"\n",
" compute_target=adla_compute)" "USING Outputters.Text();"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Build and Submit the Experiment" "## Create an AdlaStep\n",
] "\n",
}, "**[AdlaStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adla_step.adlastep?view=azure-ml-py)** is used to run U-SQL script using Azure Data Lake Analytics.\n",
{ "\n",
"cell_type": "code", "- **name:** Name of module\n",
"execution_count": null, "- **script_name:** name of U-SQL script file\n",
"metadata": {}, "- **inputs:** List of input port bindings\n",
"outputs": [], "- **outputs:** List of output port bindings\n",
"source": [ "- **compute_target:** the ADLA compute to use for this job\n",
"pipeline = Pipeline(\n", "- **params:** Dictionary of name-value pairs to pass to U-SQL job *(optional)*\n",
" description=\"adla_102\",\n", "- **degree_of_parallelism:** the degree of parallelism to use for this job *(optional)*\n",
" workspace=ws, \n", "- **priority:** the priority value to use for the current job *(optional)*\n",
" steps=[adla_step],\n", "- **runtime_version:** the runtime version of the Data Lake Analytics engine *(optional)*\n",
" default_source_directory=script_folder)\n", "- **source_directory:** folder that contains the script, assemblies etc. *(optional)*\n",
"\n", "- **hash_paths:** list of paths to hash to detect a change (script file is always hashed) *(optional)*"
"pipeline_run = Experiment(workspace, experiment_name).submit(pipeline)\n", ]
"pipeline_run.wait_for_completion()" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "markdown", "metadata": {},
"metadata": {}, "outputs": [],
"source": [ "source": [
"### View Run Details" "adla_step = AdlaStep(\n",
] " name='extract_employee_names',\n",
}, " script_name='sample_script.usql',\n",
{ " source_directory=sample_folder,\n",
"cell_type": "code", " inputs=[sample_input],\n",
"execution_count": null, " outputs=[sample_output],\n",
"metadata": {}, " compute_target=adla_compute)"
"outputs": [], ]
"source": [ },
"from azureml.widgets import RunDetails\n", {
"RunDetails(pipeline_run).show()" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## Build and Submit the Experiment"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"### Examine the run\n", "cell_type": "code",
"You can cycle through the node_run objects and examine job logs, stdout, and stderr of each of the steps." "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "pipeline = Pipeline(workspace=ws, steps=[adla_step])\n",
"execution_count": null, "\n",
"metadata": {}, "pipeline_run = Experiment(ws, 'adla_sample').submit(pipeline)\n",
"outputs": [], "pipeline_run.wait_for_completion()"
"source": [ ]
"step_runs = pipeline_run.get_children()\n", },
"for step_run in step_runs:\n", {
" status = step_run.get_status()\n", "cell_type": "markdown",
" print('node', step_run.name, 'status:', status)\n", "metadata": {},
" if status == \"Failed\":\n", "source": [
" joblog = step_run.get_job_log()\n", "### View Run Details"
" print('job log:', joblog)\n", ]
" stdout_log = step_run.get_stdout_log()\n", },
" print('stdout log:', stdout_log)\n", {
" stderr_log = step_run.get_stderr_log()\n", "cell_type": "code",
" print('stderr log:', stderr_log)\n", "execution_count": null,
" with open(\"logs-\" + step_run.name + \".txt\", \"w\") as f:\n", "metadata": {},
" f.write(joblog)\n", "outputs": [],
" print(\"Job log written to logs-\"+ step_run.name + \".txt\")\n", "source": [
" if status == \"Finished\":\n", "from azureml.widgets import RunDetails\n",
" stdout_log = step_run.get_stdout_log()\n", "RunDetails(pipeline_run).show()"
" print('stdout log:', stdout_log)" ]
] }
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python [default]", "authors": [
"language": "python", {
"name": "python3" "name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

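An aside on the `@@name@@` substitution described in the AdlaStep notebook above: the resolution happens in the service before the U-SQL job runs, but the mechanic can be sketched in a few lines of plain Python. This is an illustrative sketch only, not SDK code; the helper name and the ADLS paths are made up:

```python
import re

def resolve_placeholders(script, bindings):
    """Replace each @@name@@ token in a U-SQL script with its bound path.

    `bindings` maps input/output port names and param keys to resolved
    datastore paths, mimicking the substitution performed before the
    job is submitted to Data Lake Analytics.
    """
    def substitute(match):
        name = match.group(1)
        if name not in bindings:
            raise KeyError("unbound placeholder: %s" % name)
        return bindings[name]
    # only double-@ tokens are replaced; single-@ U-SQL variables
    # such as @employees are left untouched
    return re.sub(r"@@(\w+)@@", substitute, script)

usql = (
    '@employees =\n'
    '    EXTRACT EmpId int, EmpName string, DeptId int\n'
    '    FROM "@@employee_data@@"\n'
    '    USING Extractors.Csv();\n'
    'OUTPUT (SELECT EmpName FROM @employees)\n'
    'TO "@@sample_output@@"\n'
    'USING Outputters.Text();\n'
)
resolved = resolve_placeholders(usql, {
    "employee_data": "adl://contoso.azuredatalakestore.net/adla_sample/sample_input.csv",
    "sample_output": "adl://contoso.azuredatalakestore.net/outputs/sample_output",
})
```

After resolution, no `@@` tokens remain, while the U-SQL rowset variable `@employees` survives intact.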
@@ -1,418 +1,414 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n", "Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Azure Machine Learning Pipelines with Data Dependency\n", "# Azure Machine Learning Pipelines with Data Dependency\n",
"In this notebook, we will see how we can build a pipeline with implicit data dependancy." "In this notebook, we will see how we can build a pipeline with implicit data dependancy."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Prerequisites and Azure Machine Learning Basics\n", "## Prerequisites and Azure Machine Learning Basics\n",
"Make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. \n", "Make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. \n",
"\n", "\n",
"### Azure Machine Learning and Pipeline SDK-specific Imports" "### Azure Machine Learning and Pipeline SDK-specific Imports"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import azureml.core\n", "import azureml.core\n",
"from azureml.core import Workspace, Run, Experiment, Datastore\n", "from azureml.core import Workspace, Experiment, Datastore\n",
"from azureml.core.compute import AmlCompute\n", "from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n", "from azureml.core.compute import ComputeTarget\n",
"from azureml.core.compute import DataFactoryCompute\n", "from azureml.widgets import RunDetails\n",
"from azureml.widgets import RunDetails\n", "\n",
"\n", "# Check core SDK version number\n",
"# Check core SDK version number\n", "print(\"SDK version:\", azureml.core.VERSION)\n",
"print(\"SDK version:\", azureml.core.VERSION)\n", "\n",
"\n", "from azureml.data.data_reference import DataReference\n",
"from azureml.data.data_reference import DataReference\n", "from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.core import Pipeline, PipelineData, StepSequence\n", "from azureml.pipeline.steps import PythonScriptStep\n",
"from azureml.pipeline.steps import PythonScriptStep\n", "print(\"Pipeline SDK-specific imports completed\")"
"from azureml.pipeline.steps import DataTransferStep\n", ]
"from azureml.pipeline.core import PublishedPipeline\n", },
"from azureml.pipeline.core.graph import PipelineParameter\n", {
"\n", "cell_type": "markdown",
"print(\"Pipeline SDK-specific imports completed\")" "metadata": {},
] "source": [
}, "### Initialize Workspace\n",
{ "\n",
"cell_type": "markdown", "Initialize a [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class%29) object from persisted configuration."
"metadata": {}, ]
"source": [ },
"### Initialize Workspace\n", {
"\n", "cell_type": "code",
"Initialize a [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class%29) object from persisted configuration." "execution_count": null,
] "metadata": {
}, "tags": [
{ "create workspace"
"cell_type": "code", ]
"execution_count": null, },
"metadata": { "outputs": [],
"tags": [ "source": [
"create workspace" "ws = Workspace.from_config()\n",
] "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')\n",
}, "\n",
"outputs": [], "# Default datastore (Azure file storage)\n",
"source": [ "def_file_store = ws.get_default_datastore() \n",
"ws = Workspace.from_config()\n", "print(\"Default datastore's name: {}\".format(def_file_store.name))\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')\n", "\n",
"\n", "def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
"# Default datastore (Azure file storage)\n", "print(\"Blobstore's name: {}\".format(def_blob_store.name))"
"def_file_store = ws.get_default_datastore() \n", ]
"print(\"Default datastore's name: {}\".format(def_file_store.name))\n", },
"\n", {
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n", "cell_type": "code",
"print(\"Blobstore's name: {}\".format(def_blob_store.name))" "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "# project folder\n",
"execution_count": null, "project_folder = '.'\n",
"metadata": {}, " \n",
"outputs": [], "print('Sample projects will be created in {}.'.format(project_folder))"
"source": [ ]
"# project folder\n", },
"project_folder = '.'\n", {
" \n", "cell_type": "markdown",
"print('Sample projects will be created in {}.'.format(project_folder))" "metadata": {},
] "source": [
}, "### Required data and script files for the the tutorial\n",
{ "Sample files required to finish this tutorial are already copied to the project folder specified above. Even though the .py provided in the samples don't have much \"ML work,\" as a data scientist, you will work on this extensively as part of your work. To complete this tutorial, the contents of these files are not very important. The one-line files are for demostration purpose only."
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"### Required data and script files for the the tutorial\n", "cell_type": "markdown",
"Sample files required to finish this tutorial are already copied to the project folder specified above. Even though the .py provided in the samples don't have much \"ML work,\" as a data scientist, you will work on this extensively as part of your work. To complete this tutorial, the contents of these files are not very important. The one-line files are for demostration purpose only." "metadata": {},
] "source": [
}, "### Compute Targets\n",
{ "See the list of Compute Targets on the workspace."
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"### Compute Targets\n", "cell_type": "code",
"See the list of Compute Targets on the workspace." "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "cts = ws.compute_targets\n",
"execution_count": null, "for ct in cts:\n",
"metadata": {}, " print(ct)"
"outputs": [], ]
"source": [ },
"cts = ws.compute_targets\n", {
"for ct in cts:\n", "cell_type": "markdown",
" print(ct)" "metadata": {},
] "source": [
}, "#### Retrieve or create a Aml compute\n",
{ "Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Aml Compute in the current workspace, if it doesn't already exist. We will then run the training script on this compute target."
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"#### Retrieve or create a Aml compute\n", "cell_type": "code",
"Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Aml Compute in the current workspace, if it doesn't already exist. We will then run the training script on this compute target." "execution_count": null,
] "metadata": {},
}, "outputs": [],
{ "source": [
"cell_type": "code", "from azureml.core.compute_target import ComputeTargetException\n",
"execution_count": null, "\n",
"metadata": {}, "aml_compute_target = \"aml-compute\"\n",
"outputs": [], "try:\n",
"source": [ " aml_compute = AmlCompute(ws, aml_compute_target)\n",
"\n", " print(\"found existing compute target.\")\n",
"aml_compute_target = \"aml-compute\"\n", "except ComputeTargetException:\n",
"try:\n", " print(\"creating new compute target\")\n",
" aml_compute = AmlCompute(ws, aml_compute_target)\n", " \n",
" print(\"found existing compute target.\")\n", " provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\",\n",
"except:\n", " min_nodes = 1, \n",
" print(\"creating new compute target\")\n", " max_nodes = 4) \n",
" \n", " aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\",\n", " aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" min_nodes = 1, \n", " \n",
" max_nodes = 4) \n", "print(\"Aml Compute attached\")\n"
" aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)\n", ]
" aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n", },
" \n", {
"print(\"Aml Compute attached\")\n" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "# For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
"metadata": {}, "# example: un-comment the following line.\n",
"outputs": [], "# print(aml_compute.get_status().serialize())"
"source": [ ]
"# For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property\n", },
"# example: un-comment the following line.\n", {
"# print(aml_compute.status.serialize())" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "**Wait for this call to finish before proceeding (you will see the asterisk turning to a number).**\n",
"cell_type": "markdown", "\n",
"metadata": {}, "Now that you have created the compute target, let's see what the workspace's compute_targets() function returns. You should now see one entry named 'amlcompute' of type AmlCompute."
"source": [ ]
"**Wait for this call to finish before proceeding (you will see the asterisk turning to a number).**\n", },
"\n", {
"Now that you have created the compute target, let's see what the workspace's compute_targets() function returns. You should now see one entry named 'amlcompute' of type AmlCompute." "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "## Building Pipeline Steps with Inputs and Outputs\n",
"cell_type": "markdown", "As mentioned earlier, a step in the pipeline can take data as input. This data can be a data source that lives in one of the accessible data locations, or intermediate data produced by a previous step in the pipeline.\n",
"metadata": {}, "\n",
"source": [ "### Datasources\n",
"## Building Pipeline Steps with Inputs and Outputs\n", "Datasource is represented by **[DataReference](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.data_reference.datareference?view=azure-ml-py)** object and points to data that lives in or is accessible from Datastore. DataReference could be a pointer to a file or a directory."
"As mentioned earlier, a step in the pipeline can take data as input. This data can be a data source that lives in one of the accessible data locations, or intermediate data produced by a previous step in the pipeline.\n", ]
"\n", },
"### Datasources\n", {
"Datasource is represented by **[DataReference](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.data_reference.datareference?view=azure-ml-py)** object and points to data that lives in or is accessible from Datastore. DataReference could be a pointer to a file or a directory." "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "# Reference the data uploaded to blob storage using DataReference\n",
"metadata": {}, "# Assign the datasource to blob_input_data variable\n",
"outputs": [], "\n",
"source": [ "# DataReference(datastore, \n",
"# Reference the data uploaded to blob storage using DataReference\n", "# data_reference_name=None, \n",
"# Assign the datasource to blob_input_data variable\n", "# path_on_datastore=None, \n",
"\n", "# mode='mount', \n",
"# DataReference(datastore, \n", "# path_on_compute=None, \n",
"# data_reference_name=None, \n", "# overwrite=False)\n",
"# path_on_datastore=None, \n", "\n",
"# mode='mount', \n", "blob_input_data = DataReference(\n",
"# path_on_compute=None, \n", " datastore=def_blob_store,\n",
"# overwrite=False)\n", " data_reference_name=\"test_data\",\n",
"\n", " path_on_datastore=\"20newsgroups/20news.pkl\")\n",
"blob_input_data = DataReference(\n", "print(\"DataReference object created\")"
" datastore=def_blob_store,\n", ]
" data_reference_name=\"test_data\",\n", },
" path_on_datastore=\"20newsgroups/20news.pkl\")\n", {
"print(\"DataReference object created\")" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "### Intermediate/Output Data\n",
"cell_type": "markdown", "Intermediate data (or output of a Step) is represented by **[PipelineData](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata?view=azure-ml-py)** object. PipelineData can be produced by one step and consumed in another step by providing the PipelineData object as an output of one step and the input of one or more steps.\n",
"metadata": {}, "\n",
"source": [ "#### Constructing PipelineData\n",
"### Intermediate/Output Data\n", "- **name:** [*Required*] Name of the data item within the pipeline graph\n",
"Intermediate data (or output of a Step) is represented by **[PipelineData](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata?view=azure-ml-py)** object. PipelineData can be produced by one step and consumed in another step by providing the PipelineData object as an output of one step and the input of one or more steps.\n", "- **datastore_name:** Name of the Datastore to write this output to\n",
"\n", "- **output_name:** Name of the output\n",
"#### Constructing PipelineData\n", "- **output_mode:** Specifies \"upload\" or \"mount\" modes for producing output (default: mount)\n",
"- **name:** [*Required*] Name of the data item within the pipeline graph\n", "- **output_path_on_compute:** For \"upload\" mode, the path to which the module writes this output during execution\n",
"- **datastore_name:** Name of the Datastore to write this output to\n", "- **output_overwrite:** Flag to overwrite pre-existing data"
"- **output_name:** Name of the output\n", ]
"- **output_mode:** Specifies \"upload\" or \"mount\" modes for producing output (default: mount)\n", },
"- **output_path_on_compute:** For \"upload\" mode, the path to which the module writes this output during execution\n", {
"- **output_overwrite:** Flag to overwrite pre-existing data" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "# Define intermediate data using PipelineData\n",
"metadata": {}, "# Syntax\n",
"outputs": [], "\n",
"source": [ "# PipelineData(name, \n",
"# Define intermediate data using PipelineData\n", "# datastore=None, \n",
"# Syntax\n", "# output_name=None, \n",
"\n", "# output_mode='mount', \n",
"# PipelineData(name, \n", "# output_path_on_compute=None, \n",
"# datastore=None, \n", "# output_overwrite=None, \n",
"# output_name=None, \n", "# data_type=None, \n",
"# output_mode='mount', \n", "# is_directory=None)\n",
"# output_path_on_compute=None, \n", "\n",
"# output_overwrite=None, \n", "# Naming the intermediate data as processed_data1 and assigning it to the variable processed_data1.\n",
"# data_type=None, \n", "processed_data1 = PipelineData(\"processed_data1\",datastore=def_blob_store)\n",
"# is_directory=None)\n", "print(\"PipelineData object created\")"
"\n", ]
"# Naming the intermediate data as processed_data1 and assigning it to the variable processed_data1.\n", },
"processed_data1 = PipelineData(\"processed_data1\",datastore=def_blob_store)\n", {
"print(\"PipelineData object created\")" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "### Pipelines steps using datasources and intermediate data\n",
"cell_type": "markdown", "Machine learning pipelines can have many steps and these steps could use or reuse datasources and intermediate data. Here's how we construct such a pipeline:"
"metadata": {}, ]
"source": [ },
"### Pipelines steps using datasources and intermediate data\n", {
"Machine learning pipelines can have many steps and these steps could use or reuse datasources and intermediate data. Here's how we construct such a pipeline:" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "#### Define a Step that consumes a datasource and produces intermediate data.\n",
"cell_type": "markdown", "In this step, we define a step that consumes a datasource and produces intermediate data.\n",
"metadata": {}, "\n",
"source": [ "**Open `train.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** "
"#### Define a Step that consumes a datasource and produces intermediate data.\n", ]
"In this step, we define a step that consumes a datasource and produces intermediate data.\n", },
"\n", {
"**Open `train.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** " "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "# step4 consumes the datasource (Datareference) in the previous step\n",
"metadata": {}, "# and produces processed_data1\n",
"outputs": [], "trainStep = PythonScriptStep(\n",
"source": [ " script_name=\"train.py\", \n",
"# step4 consumes the datasource (Datareference) in the previous step\n", " arguments=[\"--input_data\", blob_input_data, \"--output_train\", processed_data1],\n",
"# and produces processed_data1\n", " inputs=[blob_input_data],\n",
"trainStep = PythonScriptStep(\n", " outputs=[processed_data1],\n",
" script_name=\"train.py\", \n", " compute_target=aml_compute, \n",
" arguments=[\"--input_data\", blob_input_data, \"--output_train\", processed_data1],\n", " source_directory=project_folder\n",
" inputs=[blob_input_data],\n", ")\n",
" outputs=[processed_data1],\n", "print(\"trainStep created\")"
" compute_target=aml_compute, \n", ]
" source_directory=project_folder\n", },
")\n", {
"print(\"trainStep created\")" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "#### Define a Step that consumes intermediate data and produces intermediate data\n",
"cell_type": "markdown", "In this step, we define a step that consumes an intermediate data and produces intermediate data.\n",
"metadata": {}, "\n",
"source": [ "**Open `extract.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** "
"#### Define a Step that consumes intermediate data and produces intermediate data\n", ]
"In this step, we define a step that consumes an intermediate data and produces intermediate data.\n", },
"\n", {
"**Open `extract.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** " "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "# step5 to use the intermediate data produced by step4\n",
"metadata": {}, "# This step also produces an output processed_data2\n",
"outputs": [], "processed_data2 = PipelineData(\"processed_data2\", datastore=def_blob_store)\n",
"source": [ "\n",
"# step5 to use the intermediate data produced by step4\n", "extractStep = PythonScriptStep(\n",
"# This step also produces an output processed_data2\n", " script_name=\"extract.py\",\n",
"processed_data2 = PipelineData(\"processed_data2\", datastore=def_blob_store)\n", " arguments=[\"--input_extract\", processed_data1, \"--output_extract\", processed_data2],\n",
"\n", " inputs=[processed_data1],\n",
"extractStep = PythonScriptStep(\n", " outputs=[processed_data2],\n",
" script_name=\"extract.py\",\n", " compute_target=aml_compute, \n",
" arguments=[\"--input_extract\", processed_data1, \"--output_extract\", processed_data2],\n", " source_directory=project_folder)\n",
" inputs=[processed_data1],\n", "print(\"extractStep created\")"
" outputs=[processed_data2],\n", ]
" compute_target=aml_compute, \n", },
" source_directory=project_folder)\n", {
"print(\"extractStep created\")" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "#### Define a Step that consumes multiple intermediate data and produces intermediate data\n",
"cell_type": "markdown", "In this step, we define a step that consumes multiple intermediate data and produces intermediate data.\n",
"metadata": {}, "\n",
"source": [ "**Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**"
"#### Define a Step that consumes multiple intermediate data and produces intermediate data\n", ]
"In this step, we define a step that consumes multiple intermediate data and produces intermediate data.\n", },
"\n", {
"**Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "# Now define step6 that takes two inputs (both intermediate data), and produce an output\n",
"metadata": {}, "processed_data3 = PipelineData(\"processed_data3\", datastore=def_blob_store)\n",
"outputs": [], "\n",
"source": [ "compareStep = PythonScriptStep(\n",
"# Now define step6 that takes two inputs (both intermediate data), and produce an output\n", " script_name=\"compare.py\",\n",
"processed_data3 = PipelineData(\"processed_data3\", datastore=def_blob_store)\n", " arguments=[\"--compare_data1\", processed_data1, \"--compare_data2\", processed_data2, \"--output_compare\", processed_data3],\n",
"\n", " inputs=[processed_data1, processed_data2],\n",
"compareStep = PythonScriptStep(\n", " outputs=[processed_data3], \n",
" script_name=\"compare.py\",\n", " compute_target=aml_compute, \n",
" arguments=[\"--compare_data1\", processed_data1, \"--compare_data2\", processed_data2, \"--output_compare\", processed_data3],\n", " source_directory=project_folder)\n",
" inputs=[processed_data1, processed_data2],\n", "print(\"compareStep created\")"
" outputs=[processed_data3], \n", ]
" compute_target=aml_compute, \n", },
" source_directory=project_folder)\n", {
"print(\"compareStep created\")" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "#### Build the pipeline"
"cell_type": "markdown", ]
"metadata": {}, },
"source": [ {
"#### Build the pipeline" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "pipeline1 = Pipeline(workspace=ws, steps=[compareStep])\n",
"metadata": {}, "print (\"Pipeline is built\")\n",
"outputs": [], "\n",
"source": [ "pipeline1.validate()\n",
"pipeline1 = Pipeline(workspace=ws, steps=[compareStep])\n", "print(\"Simple validation complete\") "
"print (\"Pipeline is built\")\n", ]
"\n", },
"pipeline1.validate()\n", {
"print(\"Simple validation complete\") " "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "pipeline_run1 = Experiment(ws, 'Data_dependency').submit(pipeline1)\n",
"metadata": {}, "print(\"Pipeline is submitted for execution\")"
"outputs": [], ]
"source": [ },
"pipeline_run1 = Experiment(ws, 'Data_dependency').submit(pipeline1)\n", {
"print(\"Pipeline is submitted for execution\")" "cell_type": "code",
] "execution_count": null,
}, "metadata": {},
{ "outputs": [],
"cell_type": "code", "source": [
"execution_count": null, "RunDetails(pipeline_run1).show()"
"metadata": {}, ]
"outputs": [], },
"source": [ {
"RunDetails(pipeline_run1).show()" "cell_type": "markdown",
] "metadata": {},
}, "source": [
{ "# Next: Publishing the Pipeline and calling it from the REST endpoint\n",
"cell_type": "markdown", "See this [notebook](./aml-pipelines-publish-and-run-using-rest-endpoint.ipynb) to understand how the pipeline is published and you can call the REST endpoint to run the pipeline."
"metadata": {}, ]
"source": [ }
"# Next: Publishing the Pipeline and calling it from the REST endpoint\n",
"See this [notebook](./aml-pipelines-publish-and-run-using-rest-endpoint.ipynb) to understand how the pipeline is published and you can call the REST endpoint to run the pipeline."
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3", "authors": [
"language": "python", {
"name": "python3" "name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
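Passing only `compareStep` to `Pipeline` works because Azure ML resolves the upstream steps from the `PipelineData` objects each step consumes. A framework-free sketch of that resolution, using hypothetical dictionaries (`inputs_of`, `producer_of`) that mirror the step graph in the notebook:

```python
def resolve_steps(final_step, inputs_of, producer_of):
    """Depth-first walk over intermediate-data dependencies so that every
    upstream step appears before the steps that consume its output."""
    ordered, seen = [], set()

    def visit(step):
        if step in seen:
            return
        seen.add(step)
        for data in inputs_of.get(step, []):
            visit(producer_of[data])
        ordered.append(step)

    visit(final_step)
    return ordered

# Dependency graph matching the notebook: step4 produces processed_data1,
# extractStep produces processed_data2, and compareStep consumes both.
inputs_of = {"compareStep": ["processed_data1", "processed_data2"],
             "extractStep": ["processed_data1"],
             "step4": []}
producer_of = {"processed_data1": "step4", "processed_data2": "extractStep"}

print(resolve_steps("compareStep", inputs_of, producer_of))
# ['step4', 'extractStep', 'compareStep']
```

This is only an illustration of the ordering behavior; the SDK performs the equivalent resolution internally when the pipeline is validated and submitted.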


@@ -0,0 +1,106 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import numpy as np
import argparse
import os
import tensorflow as tf
from azureml.core import Run
from utils import load_data
print("TensorFlow version:", tf.VERSION)
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder mounting point')
parser.add_argument('--batch-size', type=int, dest='batch_size', default=50, help='mini batch size for training')
parser.add_argument('--first-layer-neurons', type=int, dest='n_hidden_1', default=100,
help='# of neurons in the first layer')
parser.add_argument('--second-layer-neurons', type=int, dest='n_hidden_2', default=100,
help='# of neurons in the second layer')
parser.add_argument('--learning-rate', type=float, dest='learning_rate', default=0.01, help='learning rate')
args = parser.parse_args()
data_folder = os.path.join(args.data_folder, 'mnist')
print('training dataset is stored here:', data_folder)
X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0
X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0
y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)
y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape, sep='\n')
training_set_size = X_train.shape[0]
n_inputs = 28 * 28
n_h1 = args.n_hidden_1
n_h2 = args.n_hidden_2
n_outputs = 10
learning_rate = args.learning_rate
n_epochs = 50
batch_size = args.batch_size
with tf.name_scope('network'):
# construct the DNN
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name='X')
y = tf.placeholder(tf.int64, shape=(None), name='y')
h1 = tf.layers.dense(X, n_h1, activation=tf.nn.relu, name='h1')
h2 = tf.layers.dense(h1, n_h2, activation=tf.nn.relu, name='h2')
output = tf.layers.dense(h2, n_outputs, name='output')
with tf.name_scope('train'):
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=output)
loss = tf.reduce_mean(cross_entropy, name='loss')
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
with tf.name_scope('eval'):
correct = tf.nn.in_top_k(output, y, 1)
acc_op = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
# start an Azure ML run
run = Run.get_context()
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
# randomly shuffle training set
indices = np.random.permutation(training_set_size)
X_train = X_train[indices]
y_train = y_train[indices]
# batch index
b_start = 0
b_end = b_start + batch_size
for _ in range(training_set_size // batch_size):
# get a batch
X_batch, y_batch = X_train[b_start: b_end], y_train[b_start: b_end]
# update batch index for the next batch
b_start = b_start + batch_size
b_end = min(b_start + batch_size, training_set_size)
# train
sess.run(train_op, feed_dict={X: X_batch, y: y_batch})
# evaluate training set
acc_train = acc_op.eval(feed_dict={X: X_batch, y: y_batch})
# evaluate validation set
acc_val = acc_op.eval(feed_dict={X: X_test, y: y_test})
# log accuracies
run.log('training_acc', np.float(acc_train))
run.log('validation_acc', np.float(acc_val))
print(epoch, '-- Training accuracy:', acc_train, '\b Validation accuracy:', acc_val)
y_hat = np.argmax(output.eval(feed_dict={X: X_test}), axis=1)
run.log('final_acc', np.float(acc_val))
os.makedirs('./outputs/model', exist_ok=True)
# files saved in the "./outputs" folder are automatically uploaded into run history
saver.save(sess, './outputs/model/mnist-tf.model')
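The batch-index bookkeeping in the epoch loop above is easy to get wrong, so here is a minimal, framework-free sketch of the same shuffled mini-batch scheme (`iter_minibatches` is a hypothetical helper, not part of the script):

```python
import random

def iter_minibatches(n_examples, batch_size, seed=0):
    """Yield index lists covering a shuffled dataset in fixed-size batches,
    mirroring the b_start/b_end arithmetic in the training loop above.
    The trailing partial batch (n_examples % batch_size) is dropped, as in
    the original `range(training_set_size // batch_size)` loop."""
    indices = list(range(n_examples))
    random.Random(seed).shuffle(indices)
    b_start = 0
    for _ in range(n_examples // batch_size):
        b_end = min(b_start + batch_size, n_examples)
        yield indices[b_start:b_end]
        b_start = b_end

batches = list(iter_minibatches(100, 32))
print(len(batches))                  # 3 full batches per epoch
print(sum(len(b) for b in batches))  # 96 examples visited; 4 are dropped
```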


@@ -0,0 +1,27 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import gzip
import numpy as np
import struct
# load compressed MNIST gz files and return numpy arrays
def load_data(filename, label=False):
with gzip.open(filename) as gz:
struct.unpack('I', gz.read(4))
n_items = struct.unpack('>I', gz.read(4))
if not label:
n_rows = struct.unpack('>I', gz.read(4))[0]
n_cols = struct.unpack('>I', gz.read(4))[0]
res = np.frombuffer(gz.read(n_items[0] * n_rows * n_cols), dtype=np.uint8)
res = res.reshape(n_items[0], n_rows * n_cols)
else:
res = np.frombuffer(gz.read(n_items[0]), dtype=np.uint8)
res = res.reshape(n_items[0], 1)
return res
# one-hot encode a 1-D array
def one_hot_encode(array, num_of_classes):
return np.eye(num_of_classes)[array.reshape(-1)]
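For reference, the `np.eye` indexing trick in `one_hot_encode` is equivalent to this plain-Python version (a sketch for illustration only, with a hypothetical name to avoid clashing with the NumPy-based helper):

```python
def one_hot(labels, num_of_classes):
    """Pure-Python equivalent of np.eye(num_of_classes)[array.reshape(-1)]:
    row i is all zeros except for a 1 at column labels[i]."""
    return [[1 if col == y else 0 for col in range(num_of_classes)]
            for y in labels]

print(one_hot([0, 2, 1], 3))  # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```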


@@ -0,0 +1,253 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License.\n",
"\n",
"## Authentication in Azure Machine Learning\n",
"\n",
"This notebook shows you how to authenticate to your Azure ML Workspace using\n",
"\n",
" 1. Interactive Login Authentication\n",
" 2. Azure CLI Authentication\n",
" 3. Service Principal Authentication\n",
" \n",
"The interactive authentication is suitable for local experimentation on your own computer. Azure CLI authentication is suitable if you are already using Azure CLI for managing Azure resources, and want to sign in only once. The Service Principal authentication is suitable for automated workflows, for example as part of Azure Devops build."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Interactive Authentication\n",
"\n",
"Interactive authentication is the default mode when using Azure ML SDK.\n",
"\n",
"When you connect to your workspace using workspace.from_config, you will get an interactive login dialog."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Also, if you explicitly specify the subscription ID, resource group and resource group, you will get the dialog."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the user you're authenticated as must have access to the subscription and resource group. If you receive an error\n",
"\n",
"```\n",
"AuthenticationException: You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription. All the subscriptions that you have access to = ...\n",
"```\n",
"\n",
"check that the you used correct login and entered the correct subscription ID."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In some cases, you may see a version of the error message containing text: ```All the subscriptions that you have access to = []```\n",
"\n",
"In such a case, you may have to specify the tenant ID of the Azure Active Directory you're using. An example would be accessing a subscription as a guest to a tenant that is not your default. You specify the tenant by explicitly instantiating _InteractiveLoginAuthentication_ with tenant ID as argument ([see instructions how to obtain tenant Id](#get-tenant-id))."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.authentication import InteractiveLoginAuthentication\n",
"\n",
"interactive_auth = InteractiveLoginAuthentication(tenant_id=\"my-tenant-id\")\n",
"\n",
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=interactive_auth)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Azure CLI Authentication\n",
"\n",
"If you have installed azure-cli package, and used ```az login``` command to log in to your Azure Subscription, you can use _AzureCliAuthentication_ class.\n",
"\n",
"Note that interactive authentication described above won't use existing Azure CLI auth tokens. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.authentication import AzureCliAuthentication\n",
"\n",
"cli_auth = AzureCliAuthentication()\n",
"\n",
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=cli_auth)\n",
"\n",
"print(\"Found workspace {} at location {}\".format(ws.name, ws.location))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Service Principal Authentication\n",
"\n",
"When setting up a machine learning workflow as an automated process, we recommend using Service Principal Authentication. This approach decouples the authentication from any specific user login, and allows managed access control.\n",
"\n",
"Note that you must have administrator privileges over the Azure subscription to complete these steps.\n",
"\n",
"The first step is to create a service principal. First, go to [Azure Portal](https://portal.azure.com), select **Azure Active Directory** and **App Registrations**. Then select **+New application registration**, give your service principal a name, for example _my-svc-principal_. You can leave application type as is, and specify a dummy value for Sign-on URL, such as _https://invalid_.\n",
"\n",
"Then click **Create**.\n",
"\n",
"![service principal creation]<img src=\"images/svc-pr-1.PNG\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next step is to obtain the _Application ID_ (also called username) and create _password_ for the service principal.\n",
"\n",
"From the page for your newly created service principal, copy the _Application ID_. Then select **Settings** and **Keys**, write a description for your key, and select duration. Then click **Save**, and copy the _password_ to a secure location.\n",
"\n",
"![application id and password](images/svc-pr-2.PNG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id =\"get-tenant-id\"></a>\n",
"\n",
"Also, you need to obtain the tenant ID of your Azure subscription. Go back to **Azure Active Directory**, select **Properties** and copy _Directory ID_.\n",
"\n",
"![tenant id](images/svc-pr-3.PNG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, you need to give the service principal permissions to access your workspace. Navigate to **Resource Groups**, to the resource group for your Machine Learning Workspace. \n",
"\n",
"Then select **Access Control (IAM)** and **Add a role assignment**. For _Role_, specify which level of access you need to grant, for example _Contributor_. Start entering your service principal name and once it is found, select it, and click **Save**.\n",
"\n",
"![add role](images/svc-pr-4.PNG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you are ready to use the service principal authentication. For example, to connect to your Workspace, see code below and enter your own values for tenant ID, application ID, subscription ID, resource group and workspace.\n",
"\n",
"**We strongly recommended that you do not insert the secret password to code**. Instead, you can use environment variables to pass it to your code, for example through Azure Key Vault, or through secret build variables in Azure DevOps. For local testing, you can for example use following PowerShell command to set the environment variable.\n",
"\n",
"```\n",
"$env:AZUREML_PASSWORD = \"my-password\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from azureml.core.authentication import ServicePrincipalAuthentication\n",
"\n",
"svc_pr_password = os.environ.get(\"AZUREML_PASSWORD\")\n",
"\n",
"svc_pr = ServicePrincipalAuthentication(\n",
" tenant_id=\"my-tenant-id\",\n",
" service_principal_id=\"my-application-id\",\n",
" service_principal_password=svc_pr_password)\n",
"\n",
"\n",
"ws = Workspace(\n",
" subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=svc_pr\n",
" )\n",
"\n",
"print(\"Found workspace {} at location {}\".format(ws.name, ws.location))"
]
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
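The environment-variable pattern recommended above for the service principal secret can be wrapped in a small helper so that a missing secret fails fast, instead of silently passing `None` to `ServicePrincipalAuthentication` (`get_required_secret` is a hypothetical helper, shown as a sketch):

```python
import os

def get_required_secret(var_name="AZUREML_PASSWORD"):
    """Fetch a secret from the environment; raise immediately if it is unset
    so the failure is not deferred to an opaque authentication error."""
    secret = os.environ.get(var_name)
    if not secret:
        raise RuntimeError(f"Environment variable {var_name} is not set.")
    return secret

os.environ["AZUREML_PASSWORD"] = "example-only"  # demonstration value only
print(get_required_secret())                     # prints "example-only"
```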

Binary files not shown. (Four PNG images added: 37 KiB, 68 KiB, 67 KiB, and 121 KiB; these are the screenshots referenced by the authentication notebook above.)


@@ -1,8 +1,18 @@
## Azure Machine Learning service training examples
These examples show you:

1. [How to use the Estimator pattern in Azure ML](how-to-use-estimator)
2. [Train using TensorFlow Estimator and tune hyperparameters using Hyperdrive](train-hyperparameter-tune-deploy-with-tensorflow)
3. [Train using Pytorch Estimator and tune hyperparameters using Hyperdrive](train-hyperparameter-tune-deploy-with-pytorch)
4. [Train using Keras and tune hyperparameters using Hyperdrive](train-hyperparameter-tune-deploy-with-keras)
5. [Train using Chainer Estimator and tune hyperparameters using Hyperdrive](train-hyperparameter-tune-deploy-with-chainer)
6. [Distributed training using TensorFlow and Parameter Server](distributed-tensorflow-with-parameter-server)
7. [Distributed training using TensorFlow and Horovod](distributed-tensorflow-with-horovod)
8. [Distributed training using Pytorch and Horovod](distributed-pytorch-with-horovod)
9. [Distributed training using CNTK and custom Docker image](distributed-cntk-with-custom-docker)
10. [Distributed training using Chainer](distributed-chainer)
11. [Export run history records to Tensorboard](export-run-history-to-tensorboard)
12. [Use TensorBoard to monitor training execution](tensorboard)

Learn more about how to use the `Estimator` class to [train deep neural networks with Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-ml-models).


@@ -0,0 +1,153 @@
import argparse
import chainer
import chainer.cuda
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions
import chainermn
import chainermn.datasets
import chainermn.functions
chainer.disable_experimental_feature_warning = True
class MLP0SubA(chainer.Chain):
def __init__(self, comm, n_out):
super(MLP0SubA, self).__init__(
l1=L.Linear(784, n_out))
def __call__(self, x):
return F.relu(self.l1(x))
class MLP0SubB(chainer.Chain):
def __init__(self, comm):
super(MLP0SubB, self).__init__()
def __call__(self, y):
return y
class MLP0(chainermn.MultiNodeChainList):
# Model on worker 0.
def __init__(self, comm, n_out):
super(MLP0, self).__init__(comm=comm)
self.add_link(MLP0SubA(comm, n_out), rank_in=None, rank_out=1)
self.add_link(MLP0SubB(comm), rank_in=1, rank_out=None)
class MLP1Sub(chainer.Chain):
def __init__(self, n_units, n_out):
super(MLP1Sub, self).__init__(
l2=L.Linear(None, n_units),
l3=L.Linear(None, n_out))
def __call__(self, h0):
h1 = F.relu(self.l2(h0))
return self.l3(h1)
class MLP1(chainermn.MultiNodeChainList):
# Model on worker 1.
def __init__(self, comm, n_units, n_out):
super(MLP1, self).__init__(comm=comm)
self.add_link(MLP1Sub(n_units, n_out), rank_in=0, rank_out=0)
def main():
parser = argparse.ArgumentParser(
description='ChainerMN example: pipelined neural network')
parser.add_argument('--batchsize', '-b', type=int, default=100,
help='Number of images in each mini-batch')
parser.add_argument('--epoch', '-e', type=int, default=20,
help='Number of sweeps over the dataset to train')
parser.add_argument('--gpu', '-g', action='store_true',
help='Use GPU')
parser.add_argument('--out', '-o', default='result',
help='Directory to output the result')
parser.add_argument('--unit', '-u', type=int, default=1000,
help='Number of units')
args = parser.parse_args()
# Prepare ChainerMN communicator.
if args.gpu:
comm = chainermn.create_communicator('hierarchical')
data_axis, model_axis = comm.rank % 2, comm.rank // 2
data_comm = comm.split(data_axis, comm.rank)
model_comm = comm.split(model_axis, comm.rank)
device = comm.intra_rank
else:
comm = chainermn.create_communicator('naive')
data_axis, model_axis = comm.rank % 2, comm.rank // 2
data_comm = comm.split(data_axis, comm.rank)
model_comm = comm.split(model_axis, comm.rank)
device = -1
if model_comm.size != 2:
raise ValueError(
'This example can only be executed on an even number '
'of processes.')
if comm.rank == 0:
print('==========================================')
if args.gpu:
print('Using GPUs')
print('Num unit: {}'.format(args.unit))
print('Num Minibatch-size: {}'.format(args.batchsize))
print('Num epoch: {}'.format(args.epoch))
print('==========================================')
if data_axis == 0:
model = L.Classifier(MLP0(model_comm, args.unit))
elif data_axis == 1:
model = MLP1(model_comm, args.unit, 10)
if device >= 0:
chainer.cuda.get_device_from_id(device).use()
model.to_gpu()
optimizer = chainermn.create_multi_node_optimizer(
chainer.optimizers.Adam(), data_comm)
optimizer.setup(model)
# Original dataset on worker 0 and 1.
# Datasets of worker 0 and 1 are split and distributed to all workers.
if model_axis == 0:
train, test = chainer.datasets.get_mnist()
if data_axis == 1:
train = chainermn.datasets.create_empty_dataset(train)
test = chainermn.datasets.create_empty_dataset(test)
else:
train, test = None, None
train = chainermn.scatter_dataset(train, data_comm, shuffle=True)
test = chainermn.scatter_dataset(test, data_comm, shuffle=True)
train_iter = chainer.iterators.SerialIterator(
train, args.batchsize, shuffle=False)
test_iter = chainer.iterators.SerialIterator(
test, args.batchsize, repeat=False, shuffle=False)
updater = training.StandardUpdater(train_iter, optimizer, device=device)
trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
evaluator = extensions.Evaluator(test_iter, model, device=device)
evaluator = chainermn.create_multi_node_evaluator(evaluator, data_comm)
trainer.extend(evaluator)
# Some display and output extensions are necessary only for worker 0.
if comm.rank == 0:
trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(
['epoch', 'main/loss', 'validation/main/loss',
'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
trainer.extend(extensions.ProgressBar())
trainer.run()
if __name__ == '__main__':
main()
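The `comm.rank % 2` / `comm.rank // 2` arithmetic above determines which ranks share a model pipeline and which share data. This standalone sketch (`split_ranks` is a hypothetical helper, no MPI required) shows the resulting grouping:

```python
def split_ranks(world_size):
    """Group ranks the way the script above does: ranks with the same
    model_axis (rank // 2) form one two-stage model-parallel pipeline,
    and data_axis (rank % 2) selects which pipeline stage each rank runs."""
    pipelines = {}
    for rank in range(world_size):
        data_axis, model_axis = rank % 2, rank // 2
        pipelines.setdefault(model_axis, []).append((rank, data_axis))
    return pipelines

print(split_ranks(4))  # {0: [(0, 0), (1, 1)], 1: [(2, 0), (3, 1)]}
```

Each pipeline has exactly two members, which is why the script raises `ValueError` unless the total process count is even.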


@@ -0,0 +1,315 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Distributed Chainer\n",
"In this tutorial, you will run a Chainer training example on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using ChainerMN distributed training across a GPU cluster."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"Diagnostics"
]
},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"\n",
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create or attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource. Specifically, the below code creates an `STANDARD_NC6` GPU cluster that autoscales from `0` to `4` nodes.\n",
"\n",
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process.\n",
"\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# choose a name for your cluster\n",
"cluster_name = \"gpucluster\"\n",
"\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target.')\n",
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',\n",
" max_nodes=4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# use get_status() to get a detailed status for the current AmlCompute. \n",
"print(compute_target.get_status().serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above code creates GPU compute. If you instead want to create CPU compute, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train model on the remote compute\n",
"Now that we have the AmlCompute ready to go, let's run our distributed training job."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a project directory\n",
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"project_folder = './chainer-distr'\n",
"os.makedirs(project_folder, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare training script\n",
"Now you will need to create your training script. In this tutorial, the script for distributed training of MNIST is already provided for you at `train_mnist.py`. In practice, you should be able to take any custom Chainer training script as is and run it with Azure ML without having to modify your code."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once your script is ready, copy the training script `train_mnist.py` into the project directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import shutil\n",
"\n",
"shutil.copy('train_mnist.py', project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create an experiment\n",
"Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed Chainer tutorial. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"experiment_name = 'chainer-distr'\n",
"experiment = Experiment(ws, name=experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a Chainer estimator\n",
"The Azure ML SDK's Chainer estimator enables you to easily submit Chainer training jobs for both single-node and distributed runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.dnn import Chainer\n",
"\n",
"estimator = Chainer(source_directory=project_folder,\n",
" compute_target=compute_target,\n",
" entry_script='train_mnist.py',\n",
" node_count=2,\n",
" process_count_per_node=1,\n",
" distributed_backend='mpi',\n",
" use_gpu=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, Chainer and its dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `Chainer` constructor's `pip_packages` or `conda_packages` parameters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit job\n",
"Run your experiment by submitting your estimator object. Note that this call is asynchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run = experiment.submit(estimator)\n",
"print(run)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor your run\n",
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes. You can see that the widget automatically plots and visualizes the loss metric that we logged to the Azure ML run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=True)"
]
}
],
"metadata": {
"authors": [
{
"name": "minxia"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"msauthor": "minxia"
},
"nbformat": 4,
"nbformat_minor": 2
}
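The `node_count` and `process_count_per_node` settings above determine the size of the MPI world that the training script sees through its ChainerMN communicator. As a minimal sketch (the helper below is illustrative, not part of the Azure ML SDK), the total worker count is simply the product of the two:

```python
def mpi_world_size(node_count, process_count_per_node):
    """Total number of MPI worker processes the backend launches.

    Illustrative helper (not an SDK function): with the estimator above,
    comm.size inside the training script equals this product.
    """
    return node_count * process_count_per_node


# The estimator above: 2 nodes x 1 process per node -> 2 MPI workers.
print(mpi_world_size(2, 1))
```

Each of those workers runs the same `train_mnist.py` entry script and learns its own rank from the communicator at startup.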

View File

@@ -0,0 +1,125 @@
# Official ChainerMN example taken from
# https://github.com/chainer/chainer/blob/master/examples/chainermn/mnist/train_mnist.py
from __future__ import print_function
import argparse
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions
import chainermn
class MLP(chainer.Chain):

    def __init__(self, n_units, n_out):
        super(MLP, self).__init__(
            # the size of the inputs to each layer will be inferred
            l1=L.Linear(784, n_units),  # n_in -> n_units
            l2=L.Linear(n_units, n_units),  # n_units -> n_units
            l3=L.Linear(n_units, n_out),  # n_units -> n_out
        )

    def __call__(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2)


def main():
    parser = argparse.ArgumentParser(description='ChainerMN example: MNIST')
    parser.add_argument('--batchsize', '-b', type=int, default=100,
                        help='Number of images in each mini-batch')
    parser.add_argument('--communicator', type=str,
                        default='non_cuda_aware', help='Type of communicator')
    parser.add_argument('--epoch', '-e', type=int, default=20,
                        help='Number of sweeps over the dataset to train')
    parser.add_argument('--gpu', '-g', default=True,
                        help='Use GPU')
    parser.add_argument('--out', '-o', default='result',
                        help='Directory to output the result')
    parser.add_argument('--resume', '-r', default='',
                        help='Resume the training from snapshot')
    parser.add_argument('--unit', '-u', type=int, default=1000,
                        help='Number of units')
    args = parser.parse_args()

    # Prepare ChainerMN communicator.
    if args.gpu:
        if args.communicator == 'naive':
            print("Error: 'naive' communicator does not support GPU.\n")
            exit(-1)
        comm = chainermn.create_communicator(args.communicator)
        device = comm.intra_rank
    else:
        if args.communicator != 'naive':
            print('Warning: using naive communicator '
                  'because only naive supports CPU-only execution')
        comm = chainermn.create_communicator('naive')
        device = -1

    if comm.rank == 0:
        print('==========================================')
        print('Num process (COMM_WORLD): {}'.format(comm.size))
        if args.gpu:
            print('Using GPUs')
        print('Using {} communicator'.format(args.communicator))
        print('Num unit: {}'.format(args.unit))
        print('Num Minibatch-size: {}'.format(args.batchsize))
        print('Num epoch: {}'.format(args.epoch))
        print('==========================================')

    model = L.Classifier(MLP(args.unit, 10))
    if device >= 0:
        chainer.cuda.get_device_from_id(device).use()
        model.to_gpu()

    # Create a multi node optimizer from a standard Chainer optimizer.
    optimizer = chainermn.create_multi_node_optimizer(
        chainer.optimizers.Adam(), comm)
    optimizer.setup(model)

    # Split and distribute the dataset. Only worker 0 loads the whole dataset.
    # Datasets of worker 0 are evenly split and distributed to all workers.
    if comm.rank == 0:
        train, test = chainer.datasets.get_mnist()
    else:
        train, test = None, None
    train = chainermn.scatter_dataset(train, comm, shuffle=True)
    test = chainermn.scatter_dataset(test, comm, shuffle=True)

    train_iter = chainer.iterators.SerialIterator(train, args.batchsize)
    test_iter = chainer.iterators.SerialIterator(test, args.batchsize,
                                                 repeat=False, shuffle=False)

    updater = training.StandardUpdater(train_iter, optimizer, device=device)
    trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)

    # Create a multi node evaluator from a standard Chainer evaluator.
    evaluator = extensions.Evaluator(test_iter, model, device=device)
    evaluator = chainermn.create_multi_node_evaluator(evaluator, comm)
    trainer.extend(evaluator)

    # Some display and output extensions are necessary only for one worker.
    # (Otherwise, there would just be repeated outputs.)
    if comm.rank == 0:
        trainer.extend(extensions.dump_graph('main/loss'))
        trainer.extend(extensions.LogReport())
        trainer.extend(extensions.PrintReport(
            ['epoch', 'main/loss', 'validation/main/loss',
             'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
        trainer.extend(extensions.ProgressBar())

    if args.resume:
        chainer.serializers.load_npz(args.resume, trainer)

    trainer.run()


if __name__ == '__main__':
    main()
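In the script above, `chainermn.scatter_dataset` has rank 0 load the whole dataset and then splits it evenly across all workers. A minimal pure-Python sketch of that even-split behavior (illustrative only, not the library's implementation) looks like this:

```python
def scatter_evenly(dataset, comm_size):
    """Illustrative even split: rank i would receive chunks[i].

    Mimics the partitioning idea behind chainermn.scatter_dataset;
    not the actual library code.
    """
    n = len(dataset)
    base, rem = divmod(n, comm_size)
    chunks, start = [], 0
    for rank in range(comm_size):
        # The first `rem` ranks absorb one extra example each.
        size = base + (1 if rank < rem else 0)
        chunks.append(dataset[start:start + size])
        start += size
    return chunks


# 10 examples over 4 workers -> chunk sizes [3, 3, 2, 2]
print([len(c) for c in scatter_evenly(list(range(10)), 4)])
```

Because every rank then builds its iterator only over its own chunk, each epoch touches the full dataset exactly once across the cluster.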

View File

@@ -1,394 +1,394 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Distributed CNTK using custom docker images\n",
"In this tutorial, you will train a CNTK model on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using a custom docker image and distributed training."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"Diagnostics"
]
},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"\n",
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name,\n",
"      'Azure region: ' + ws.location,\n",
"      'Subscription id: ' + ws.subscription_id,\n",
"      'Resource group: ' + ws.resource_group, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
"\n",
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process.\n",
"\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# choose a name for your cluster\n",
"cluster_name = \"gpucluster\"\n",
"\n",
"try:\n",
"    compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
"    print('Found existing compute target.')\n",
"except ComputeTargetException:\n",
"    print('Creating a new compute target...')\n",
"    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',\n",
"                                                           max_nodes=4)\n",
"\n",
"    # create the cluster\n",
"    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
"    compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# use get_status() to get a detailed status for the current AmlCompute\n",
"print(compute_target.get_status().serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upload training data\n",
"For this tutorial, we will be using the MNIST dataset.\n",
"\n",
"First, let's download the dataset. We've included the `install_mnist.py` script to download the data and convert it to a CNTK-supported format. Our data files will get written to a directory named `'mnist'`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import install_mnist\n",
"\n",
"install_mnist.main('mnist')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the data accessible for remote training, you will need to upload the data from your local machine to the cloud. AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data, and interact with it from your remote compute targets.\n",
"\n",
"Each workspace is associated with a default datastore. In this tutorial, we will upload the training data to this default datastore, which we will then mount on the remote compute for training in the next section."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = ws.get_default_datastore()\n",
"print(ds.datastore_type, ds.account_name, ds.container_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following code will upload the training data to the path `./mnist` on the default datastore."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds.upload(src_dir='./mnist', target_path='./mnist')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's get a reference to the path on the datastore with the training data. We can do so using the `path` method. In the next section, we can then pass this reference to our training script's `--data_dir` argument."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"path_on_datastore = 'mnist'\n",
"ds_data = ds.path(path_on_datastore)\n",
"print(ds_data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train model on the remote compute\n",
"Now that we have the cluster ready to go, let's run our distributed training job."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a project directory\n",
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"project_folder = './cntk-distr'\n",
"os.makedirs(project_folder, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copy the training script `cntk_distr_mnist.py` into this project directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import shutil\n",
"\n",
"shutil.copy('cntk_distr_mnist.py', project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create an experiment\n",
"Create an [experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed CNTK tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"experiment_name = 'cntk-distr'\n",
"experiment = Experiment(ws, name=experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create an Estimator\n",
"The AML SDK's base Estimator enables you to easily submit custom scripts for both single-node and distributed runs. You should use this generic estimator for training code using frameworks such as sklearn or CNTK that don't have corresponding custom estimators. For more information on using the generic estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-ml-models)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.estimator import Estimator\n",
"\n",
"script_params = {\n",
"    '--num_epochs': 20,\n",
"    '--data_dir': ds_data.as_mount(),\n",
"    '--output_dir': './outputs'\n",
"}\n",
"\n",
"estimator = Estimator(source_directory=project_folder,\n",
"                      compute_target=compute_target,\n",
"                      entry_script='cntk_distr_mnist.py',\n",
"                      script_params=script_params,\n",
"                      node_count=2,\n",
"                      process_count_per_node=1,\n",
"                      distributed_backend='mpi',\n",
"                      pip_packages=['cntk-gpu==2.6'],\n",
"                      custom_docker_image='microsoft/mmlspark:gpu-0.12',\n",
"                      use_gpu=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We would like to train our model using a [pre-built Docker container](https://hub.docker.com/r/microsoft/mmlspark/). To do so, specify the name of the docker image to the argument `custom_docker_image`. Finally, we provide the `cntk` package to `pip_packages` to install CNTK 2.6 on our custom image.\n",
"\n",
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to run distributed CNTK, which uses MPI, you must provide the argument `distributed_backend='mpi'`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit job\n",
"Run your experiment by submitting your estimator object. Note that this call is asynchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run = experiment.submit(estimator)\n",
"print(run)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor your run\n",
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can block until the script has completed training before running more code."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=True)"
]
}
],
"metadata": {
"authors": [
{
"name": "minxia"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
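The `script_params` dictionary in the estimator above is delivered to the entry script as ordinary command-line arguments. A hedged sketch of how such a flag-to-value mapping flattens into an argv-style list (a hypothetical helper for illustration, not the SDK's actual code):

```python
def flatten_script_params(script_params):
    """Flatten {'--flag': value} pairs into an argv-style list.

    Hypothetical helper: shows the shape of the command line the entry
    script's argparse parser would see, not the SDK's internal logic.
    """
    argv = []
    for flag, value in script_params.items():
        argv.extend([flag, str(value)])
    return argv


print(flatten_script_params({'--num_epochs': 20, '--output_dir': './outputs'}))
# -> ['--num_epochs', '20', '--output_dir', './outputs']
```

This is why the entry script pairs naturally with `argparse`: each key becomes a flag, and mounted datastore references such as `ds_data.as_mount()` resolve to a path string on the compute target.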

View File

@@ -1,335 +1,335 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Distributed PyTorch with Horovod\n",
"In this tutorial, you will train a PyTorch model on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using distributed training via [Horovod](https://github.com/uber/horovod) across a GPU cluster."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`\n",
"* Review the [tutorial](../train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) on single-node PyTorch training using Azure Machine Learning"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"Diagnostics"
]
},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"\n",
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name,\n",
"      'Azure region: ' + ws.location,\n",
"      'Subscription id: ' + ws.subscription_id,\n",
"      'Resource group: ' + ws.resource_group, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create or attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource. Specifically, the below code creates a `STANDARD_NC6` GPU cluster that autoscales from `0` to `4` nodes.\n",
"\n",
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process.\n",
"\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n", "from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n", "from azureml.core.compute_target import ComputeTargetException\n",
"\n", "\n",
"# choose a name for your cluster\n", "# choose a name for your cluster\n",
"cluster_name = \"gpucluster\"\n", "cluster_name = \"gpucluster\"\n",
"\n", "\n",
"try:\n", "try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n", " compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target.')\n", " print('Found existing compute target.')\n",
"except ComputeTargetException:\n", "except ComputeTargetException:\n",
" print('Creating a new compute target...')\n", " print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',\n", " compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',\n",
" max_nodes=4)\n", " max_nodes=4)\n",
"\n", "\n",
" # create the cluster\n", " # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n", "\n",
" compute_target.wait_for_completion(show_output=True)\n", " compute_target.wait_for_completion(show_output=True)\n",
"\n", "\n",
"# Use the 'status' property to get a detailed status for the current AmlCompute. \n", "# use get_status() to get a detailed status for the current AmlCompute. \n",
"print(compute_target.status.serialize())" "print(compute_target.get_status().serialize())"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"The above code creates GPU compute. If you instead want to create CPU compute, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`." "The above code creates GPU compute. If you instead want to create CPU compute, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Train model on the remote compute\n", "## Train model on the remote compute\n",
"Now that we have the AmlCompute ready to go, let's run our distributed training job." "Now that we have the AmlCompute ready to go, let's run our distributed training job."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Create a project directory\n", "### Create a project directory\n",
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on." "Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import os\n", "import os\n",
"\n", "\n",
"project_folder = './pytorch-distr-hvd'\n", "project_folder = './pytorch-distr-hvd'\n",
"os.makedirs(project_folder, exist_ok=True)" "os.makedirs(project_folder, exist_ok=True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Prepare training script\n", "### Prepare training script\n",
"Now you will need to create your training script. In this tutorial, the script for distributed training of MNIST is already provided for you at `pytorch_horovod_mnist.py`. In practice, you should be able to take any custom PyTorch training script as is and run it with Azure ML without having to modify your code.\n", "Now you will need to create your training script. In this tutorial, the script for distributed training of MNIST is already provided for you at `pytorch_horovod_mnist.py`. In practice, you should be able to take any custom PyTorch training script as is and run it with Azure ML without having to modify your code.\n",
"\n", "\n",
"However, if you would like to use Azure ML's [metric logging](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#logging) capabilities, you will have to add a small amount of Azure ML logic inside your training script. In this example, at each logging interval, we will log the loss for that minibatch to our Azure ML run.\n", "However, if you would like to use Azure ML's [metric logging](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#logging) capabilities, you will have to add a small amount of Azure ML logic inside your training script. In this example, at each logging interval, we will log the loss for that minibatch to our Azure ML run.\n",
"\n", "\n",
"To do so, in `pytorch_horovod_mnist.py`, we will first access the Azure ML `Run` object within the script:\n", "To do so, in `pytorch_horovod_mnist.py`, we will first access the Azure ML `Run` object within the script:\n",
"```Python\n", "```Python\n",
"from azureml.core.run import Run\n", "from azureml.core.run import Run\n",
"run = Run.get_context()\n", "run = Run.get_context()\n",
"```\n", "```\n",
"Later within the script, we log the loss metric to our run:\n", "Later within the script, we log the loss metric to our run:\n",
"```Python\n", "```Python\n",
"run.log('loss', loss.item())\n", "run.log('loss', loss.item())\n",
"```" "```"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Once your script is ready, copy the training script `pytorch_horovod_mnist.py` into the project directory." "Once your script is ready, copy the training script `pytorch_horovod_mnist.py` into the project directory."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import shutil\n", "import shutil\n",
"\n", "\n",
"shutil.copy('pytorch_horovod_mnist.py', project_folder)" "shutil.copy('pytorch_horovod_mnist.py', project_folder)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Create an experiment\n", "### Create an experiment\n",
"Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed PyTorch tutorial. " "Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed PyTorch tutorial. "
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core import Experiment\n", "from azureml.core import Experiment\n",
"\n", "\n",
"experiment_name = 'pytorch-distr-hvd'\n", "experiment_name = 'pytorch-distr-hvd'\n",
"experiment = Experiment(ws, name=experiment_name)" "experiment = Experiment(ws, name=experiment_name)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Create a PyTorch estimator\n", "### Create a PyTorch estimator\n",
"The Azure ML SDK's PyTorch estimator enables you to easily submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch)." "The Azure ML SDK's PyTorch estimator enables you to easily submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch)."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.train.dnn import PyTorch\n", "from azureml.train.dnn import PyTorch\n",
"\n", "\n",
"estimator = PyTorch(source_directory=project_folder,\n", "estimator = PyTorch(source_directory=project_folder,\n",
" compute_target=compute_target,\n", " compute_target=compute_target,\n",
" entry_script='pytorch_horovod_mnist.py',\n", " entry_script='pytorch_horovod_mnist.py',\n",
" node_count=2,\n", " node_count=2,\n",
" process_count_per_node=1,\n", " process_count_per_node=1,\n",
" distributed_backend='mpi',\n", " distributed_backend='mpi',\n",
" use_gpu=True)" " use_gpu=True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI/Horovod, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, PyTorch, Horovod and their dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `PyTorch` constructor's `pip_packages` or `conda_packages` parameters." "The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI/Horovod, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, PyTorch, Horovod and their dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `PyTorch` constructor's `pip_packages` or `conda_packages` parameters."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Submit job\n", "### Submit job\n",
"Run your experiment by submitting your estimator object. Note that this call is asynchronous." "Run your experiment by submitting your estimator object. Note that this call is asynchronous."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"run = experiment.submit(estimator)\n", "run = experiment.submit(estimator)\n",
"print(run)" "print(run)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Monitor your run\n", "### Monitor your run\n",
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes. You can see that the widget automatically plots and visualizes the loss metric that we logged to the Azure ML run." "You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes. You can see that the widget automatically plots and visualizes the loss metric that we logged to the Azure ML run."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.widgets import RunDetails\n", "from azureml.widgets import RunDetails\n",
"\n", "\n",
"RunDetails(run).show()" "RunDetails(run).show()"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Alternatively, you can block until the script has completed training before running more code." "Alternatively, you can block until the script has completed training before running more code."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"run.wait_for_completion(show_output=True) # this provides a verbose log" "run.wait_for_completion(show_output=True) # this provides a verbose log"
] ]
} }
],
"metadata": {
"authors": [
{
"name": "minxia"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "minxia"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"msauthor": "minxia"
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"msauthor": "minxia"
},
"nbformat": 4,
"nbformat_minor": 2
}

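The pytorch_horovod_mnist.py hunk diffed below downloads MNIST into a per-rank directory (`datasets.MNIST('data-%d' % hvd.rank(), ...)`) so that concurrent Horovod workers do not clobber each other's files. A minimal pure-Python sketch of that naming scheme (the helper name `rank_local_dir` is ours, for illustration only, not part of the script):

```python
# Hedged sketch: reproduce the per-rank data directory naming used in
# pytorch_horovod_mnist.py, where each Horovod worker downloads MNIST
# into its own 'data-<rank>' folder to avoid write contention.
# The helper name `rank_local_dir` is ours, for illustration only.
def rank_local_dir(rank, base="data"):
    """Return the rank-suffixed directory, e.g. 'data-0' for rank 0."""
    return "%s-%d" % (base, rank)

# Each of the N workers resolves a distinct local folder:
print(rank_local_dir(0))  # → data-0
print(rank_local_dir(3))  # → data-3
```

With `node_count=2` and `process_count_per_node=1` as in the estimator above, ranks 0 and 1 would each fetch their own copy of the dataset.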
View File

@@ -50,7 +50,7 @@ if args.cuda:
torch.cuda.manual_seed(args.seed) torch.cuda.manual_seed(args.seed)
kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {} kwargs = {}
train_dataset = \ train_dataset = \
datasets.MNIST('data-%d' % hvd.rank(), train=True, download=True, datasets.MNIST('data-%d' % hvd.rank(), train=True, download=True,
transform=transforms.Compose([ transform=transforms.Compose([

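The hunk above replaces the CUDA-conditional DataLoader keywords with an empty dict. The pattern being removed can be sketched in isolation as follows (the `use_cuda` flag stands in for `args.cuda`; the function name is ours):

```python
# Hedged sketch of the DataLoader keyword selection that the hunk above
# removes: on CUDA hosts the script originally enabled pinned host memory
# and one loader worker; the updated version always passes an empty dict.
# `use_cuda` stands in for args.cuda from the training script.
def loader_kwargs(use_cuda):
    # old behavior (left column of the diff)
    return {'num_workers': 1, 'pin_memory': True} if use_cuda else {}

print(loader_kwargs(True))   # → {'num_workers': 1, 'pin_memory': True}
print(loader_kwargs(False))  # → {}
```

The dict would then be splatted into the loader call, e.g. `DataLoader(dataset, batch_size=..., **loader_kwargs(use_cuda))`.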
View File

@@ -1,404 +1,402 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n", "Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Distributed Tensorflow with Horovod\n", "# Distributed Tensorflow with Horovod\n",
"In this tutorial, you will train a word2vec model in TensorFlow using distributed training via [Horovod](https://github.com/uber/horovod)." "In this tutorial, you will train a word2vec model in TensorFlow using distributed training via [Horovod](https://github.com/uber/horovod)."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Prerequisites\n", "## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n", "* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n", "* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n", " * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)\n", " * create a workspace and its configuration file (`config.json`)\n",
"* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK" "* Review the [tutorial](../train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) on single-node TensorFlow training using the SDK"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# Check core SDK version number\n", "# Check core SDK version number\n",
"import azureml.core\n", "import azureml.core\n",
"\n", "\n",
"print(\"SDK version:\", azureml.core.VERSION)" "print(\"SDK version:\", azureml.core.VERSION)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Diagnostics\n", "## Diagnostics\n",
"Opt-in diagnostics for better experience, quality, and security of future releases." "Opt-in diagnostics for better experience, quality, and security of future releases."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": { "metadata": {
"tags": [ "tags": [
"Diagnostics" "Diagnostics"
] ]
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.telemetry import set_diagnostics_collection\n", "from azureml.telemetry import set_diagnostics_collection\n",
"\n", "\n",
"set_diagnostics_collection(send_diagnostics=True)" "set_diagnostics_collection(send_diagnostics=True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Initialize workspace\n", "## Initialize workspace\n",
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`." "Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.workspace import Workspace\n", "from azureml.core.workspace import Workspace\n",
"\n", "\n",
"ws = Workspace.from_config()\n", "ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n", "print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n", " 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n", " 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')" " 'Resource group: ' + ws.resource_group, sep='\\n')"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Create or Attach existing AmlCompute\n", "## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.\n", "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
"\n", "\n",
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", "**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"\n", "\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n", "from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n", "from azureml.core.compute_target import ComputeTargetException\n",
"\n", "\n",
"# choose a name for your cluster\n", "# choose a name for your cluster\n",
"cluster_name = \"gpucluster\"\n", "cluster_name = \"gpucluster\"\n",
"\n", "\n",
"try:\n", "try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n", " compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target')\n", " print('Found existing compute target')\n",
"except ComputeTargetException:\n", "except ComputeTargetException:\n",
" print('Creating a new compute target...')\n", " print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n", " compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
" max_nodes=4)\n", " max_nodes=4)\n",
"\n", "\n",
" # create the cluster\n", " # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n", "\n",
" compute_target.wait_for_completion(show_output=True)\n", " compute_target.wait_for_completion(show_output=True)\n",
"\n", "\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n", "# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())" "print(compute_target.get_status().serialize())"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`." "The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Upload data to datastore\n", "## Upload data to datastore\n",
"To make data accessible for remote training, AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data to Azure Storage, and interact with it from your remote compute targets. \n", "To make data accessible for remote training, AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data to Azure Storage, and interact with it from your remote compute targets. \n",
"\n", "\n",
"If your data is already stored in Azure, or you download the data as part of your training script, you will not need to do this step. For this tutorial, although you can download the data in your training script, we will demonstrate how to upload the training data to a datastore and access it during training to illustrate the datastore functionality." "If your data is already stored in Azure, or you download the data as part of your training script, you will not need to do this step. For this tutorial, although you can download the data in your training script, we will demonstrate how to upload the training data to a datastore and access it during training to illustrate the datastore functionality."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"First, download the training data from [here](http://mattmahoney.net/dc/text8.zip) to your local machine:" "First, download the training data from [here](http://mattmahoney.net/dc/text8.zip) to your local machine:"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import os\n", "import os\n",
"import urllib\n", "import urllib\n",
"\n", "\n",
"os.makedirs('./data', exist_ok=True)\n", "os.makedirs('./data', exist_ok=True)\n",
"download_url = 'http://mattmahoney.net/dc/text8.zip'\n", "download_url = 'http://mattmahoney.net/dc/text8.zip'\n",
"urllib.request.urlretrieve(download_url, filename='./data/text8.zip')" "urllib.request.urlretrieve(download_url, filename='./data/text8.zip')"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Each workspace is associated with a default datastore. In this tutorial, we will upload the training data to this default datastore." "Each workspace is associated with a default datastore. In this tutorial, we will upload the training data to this default datastore."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"ds = ws.get_default_datastore()\n", "ds = ws.get_default_datastore()\n",
"print(ds.datastore_type, ds.account_name, ds.container_name)" "print(ds.datastore_type, ds.account_name, ds.container_name)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Upload the contents of the data directory to the path `./data` on the default datastore." "Upload the contents of the data directory to the path `./data` on the default datastore."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"ds.upload(src_dir='data', target_path='data', overwrite=True, show_progress=True)" "ds.upload(src_dir='data', target_path='data', overwrite=True, show_progress=True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"For convenience, let's get a reference to the path on the datastore with the zip file of training data. We can do so using the `path` method. In the next section, we can then pass this reference to our training script's `--input_data` argument. " "For convenience, let's get a reference to the path on the datastore with the zip file of training data. We can do so using the `path` method. In the next section, we can then pass this reference to our training script's `--input_data` argument. "
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"path_on_datastore = 'data/text8.zip'\n", "path_on_datastore = 'data/text8.zip'\n",
"ds_data = ds.path(path_on_datastore)\n", "ds_data = ds.path(path_on_datastore)\n",
"print(ds_data)" "print(ds_data)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Train model on the remote compute" "## Train model on the remote compute"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Create a project directory\n", "### Create a project directory\n",
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on." "Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import os\n", "project_folder = './tf-distr-hvd'\n",
"\n", "os.makedirs(project_folder, exist_ok=True)"
"project_folder = './tf-distr-hvd'\n", ]
"os.makedirs(project_folder, exist_ok=True)" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "Copy the training script `tf_horovod_word2vec.py` into this project directory."
"source": [ ]
"Copy the training script `tf_horovod_word2vec.py` into this project directory." },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "import shutil\n",
"source": [ "\n",
"import shutil\n", "shutil.copy('tf_horovod_word2vec.py', project_folder)"
"\n", ]
"shutil.copy('tf_horovod_word2vec.py', project_folder)" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "### Create an experiment\n",
"source": [ "Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed TensorFlow tutorial. "
"### Create an experiment\n", ]
"Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed TensorFlow tutorial. " },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "from azureml.core import Experiment\n",
"source": [ "\n",
"from azureml.core import Experiment\n", "experiment_name = 'tf-distr-hvd'\n",
"\n", "experiment = Experiment(ws, name=experiment_name)"
"experiment_name = 'tf-distr-hvd'\n", ]
"experiment = Experiment(ws, name=experiment_name)" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "### Create a TensorFlow estimator\n",
"source": [ "The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow)."
"### Create a TensorFlow estimator\n", ]
"The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow)." },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "from azureml.train.dnn import TensorFlow\n",
"source": [ "\n",
"from azureml.train.dnn import TensorFlow\n", "script_params={\n",
"\n", " '--input_data': ds_data\n",
"script_params={\n", "}\n",
" '--input_data': ds_data\n", "\n",
"}\n", "estimator= TensorFlow(source_directory=project_folder,\n",
"\n", " compute_target=compute_target,\n",
"estimator= TensorFlow(source_directory=project_folder,\n", " script_params=script_params,\n",
" compute_target=compute_target,\n", " entry_script='tf_horovod_word2vec.py',\n",
" script_params=script_params,\n", " node_count=2,\n",
" entry_script='tf_horovod_word2vec.py',\n", " process_count_per_node=1,\n",
" node_count=2,\n", " distributed_backend='mpi',\n",
" process_count_per_node=1,\n", " use_gpu=True)"
" distributed_backend='mpi',\n", ]
" use_gpu=True)" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI/Horovod, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, TensorFlow, Horovod and their dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `TensorFlow` constructor's `pip_packages` or `conda_packages` parameters.\n",
"source": [ "\n",
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI/Horovod, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, TensorFlow, Horovod and their dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `TensorFlow` constructor's `pip_packages` or `conda_packages` parameters.\n", "Note that we passed our training data reference `ds_data` to our script's `--input_data` argument. This will 1) mount our datastore on the remote compute and 2) provide the path to the data zip file on our datastore."
"\n", ]
"Note that we passed our training data reference `ds_data` to our script's `--input_data` argument. This will 1) mount our datastore on the remote compute and 2) provide the path to the data zip file on our datastore." },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "### Submit job\n",
"source": [ "Run your experiment by submitting your estimator object. Note that this call is asynchronous."
"### Submit job\n", ]
"Run your experiment by submitting your estimator object. Note that this call is asynchronous." },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "run = experiment.submit(estimator)\n",
"source": [ "print(run)"
"run = experiment.submit(estimator)\n", ]
"print(run)" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "### Monitor your run\n",
"source": [ "You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes."
"### Monitor your run\n", ]
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes." },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "from azureml.widgets import RunDetails\n",
"source": [ "RunDetails(run).show()"
"from azureml.widgets import RunDetails\n", ]
"RunDetails(run).show()" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "Alternatively, you can block until the script has completed training before running more code."
"source": [ ]
"Alternatively, you can block until the script has completed training before running more code." },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "run.wait_for_completion(show_output=True)"
"source": [ ]
"run.wait_for_completion(show_output=True)" }
]
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"msauthor": "minxia"
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"msauthor": "minxia"
},
"nbformat": 4,
"nbformat_minor": 2
}

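The Horovod notebook diffed above runs one worker process per node across two nodes (`node_count=2`, `process_count_per_node=1`). As a quick sanity check of how those estimator settings translate into the MPI world size Horovod sees at runtime, here is a minimal sketch in plain Python; the helper function is illustrative, not part of the AML SDK:

```python
def mpi_world_size(node_count: int, process_count_per_node: int) -> int:
    """Total MPI ranks launched for a Horovod run: one process
    per worker on each node (what hvd.size() reports at runtime)."""
    return node_count * process_count_per_node

# Settings from the estimator in the diff above
print(mpi_world_size(node_count=2, process_count_per_node=1))  # 2
```

With `use_gpu=True` on `STANDARD_NC6` nodes (one GPU each), one process per node keeps one Horovod rank per GPU.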

@@ -1,317 +1,317 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n", "Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Distributed TensorFlow with parameter server\n", "# Distributed TensorFlow with parameter server\n",
"In this tutorial, you will train a TensorFlow model on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using native [distributed TensorFlow](https://www.tensorflow.org/deploy/distributed)." "In this tutorial, you will train a TensorFlow model on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using native [distributed TensorFlow](https://www.tensorflow.org/deploy/distributed)."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Prerequisites\n", "## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n", "* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n", "* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n", " * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)\n", " * create a workspace and its configuration file (`config.json`)\n",
"* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK" "* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# Check core SDK version number\n", "# Check core SDK version number\n",
"import azureml.core\n", "import azureml.core\n",
"\n", "\n",
"print(\"SDK version:\", azureml.core.VERSION)" "print(\"SDK version:\", azureml.core.VERSION)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Diagnostics\n", "## Diagnostics\n",
"Opt-in diagnostics for better experience, quality, and security of future releases." "Opt-in diagnostics for better experience, quality, and security of future releases."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": { "metadata": {
"tags": [ "tags": [
"Diagnostics" "Diagnostics"
] ]
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.telemetry import set_diagnostics_collection\n", "from azureml.telemetry import set_diagnostics_collection\n",
"\n", "\n",
"set_diagnostics_collection(send_diagnostics=True)" "set_diagnostics_collection(send_diagnostics=True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Initialize workspace\n", "## Initialize workspace\n",
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`." "Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.workspace import Workspace\n", "from azureml.core.workspace import Workspace\n",
"\n", "\n",
"ws = Workspace.from_config()\n", "ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n", "print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n", " 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n", " 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')" " 'Resource group: ' + ws.resource_group, sep = '\\n')"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Create or Attach existing AmlCompute\n", "## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.\n", "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
"\n", "\n",
"**Creation of AmlCompute takes approximately 5 minutes.** If an AmlCompute cluster with that name is already in your workspace, this code will skip the creation process.\n", "**Creation of AmlCompute takes approximately 5 minutes.** If an AmlCompute cluster with that name is already in your workspace, this code will skip the creation process.\n",
"\n", "\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n", "from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n", "from azureml.core.compute_target import ComputeTargetException\n",
"\n", "\n",
"# choose a name for your cluster\n", "# choose a name for your cluster\n",
"cluster_name = \"gpucluster\"\n", "cluster_name = \"gpucluster\"\n",
"\n", "\n",
"try:\n", "try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n", " compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target.')\n", " print('Found existing compute target.')\n",
"except ComputeTargetException:\n", "except ComputeTargetException:\n",
" print('Creating a new compute target...')\n", " print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n", " compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
" max_nodes=4)\n", " max_nodes=4)\n",
"\n", "\n",
" # create the cluster\n", " # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n", "\n",
" compute_target.wait_for_completion(show_output=True)\n", " compute_target.wait_for_completion(show_output=True)\n",
"\n", "\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n", "# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())" "print(compute_target.get_status().serialize())"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Train model on the remote compute\n", "## Train model on the remote compute\n",
"Now that we have the cluster ready to go, let's run our distributed training job." "Now that we have the cluster ready to go, let's run our distributed training job."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Create a project directory\n", "### Create a project directory\n",
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on." "Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import os\n", "import os\n",
"\n", "\n",
"project_folder = './tf-distr-ps'\n", "project_folder = './tf-distr-ps'\n",
"os.makedirs(project_folder, exist_ok=True)" "os.makedirs(project_folder, exist_ok=True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copy the training script `tf_mnist_replica.py` into this project directory." "Copy the training script `tf_mnist_replica.py` into this project directory."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"import shutil\n", "import shutil\n",
"\n", "\n",
"shutil.copy('tf_mnist_replica.py', project_folder)" "shutil.copy('tf_mnist_replica.py', project_folder)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Create an experiment\n", "### Create an experiment\n",
"Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed TensorFlow tutorial. " "Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed TensorFlow tutorial. "
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core import Experiment\n", "from azureml.core import Experiment\n",
"\n", "\n",
"experiment_name = 'tf-distr-ps'\n", "experiment_name = 'tf-distr-ps'\n",
"experiment = Experiment(ws, name=experiment_name)" "experiment = Experiment(ws, name=experiment_name)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Create a TensorFlow estimator\n", "### Create a TensorFlow estimator\n",
"The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, see the [how-to guide](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow)." "The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, see the [how-to guide](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow)."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.train.dnn import TensorFlow\n", "from azureml.train.dnn import TensorFlow\n",
"\n", "\n",
"script_params={\n", "script_params={\n",
" '--num_gpus': 1,\n", " '--num_gpus': 1,\n",
" '--train_steps': 500\n", " '--train_steps': 500\n",
"}\n", "}\n",
"\n", "\n",
"estimator = TensorFlow(source_directory=project_folder,\n", "estimator = TensorFlow(source_directory=project_folder,\n",
" compute_target=compute_target,\n", " compute_target=compute_target,\n",
" script_params=script_params,\n", " script_params=script_params,\n",
" entry_script='tf_mnist_replica.py',\n", " entry_script='tf_mnist_replica.py',\n",
" node_count=2,\n", " node_count=2,\n",
" worker_count=2,\n", " worker_count=2,\n",
" parameter_server_count=1, \n", " parameter_server_count=1, \n",
" distributed_backend='ps',\n", " distributed_backend='ps',\n",
" use_gpu=True)" " use_gpu=True)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"The above code specifies that we will run our training script on `2` nodes, with two workers and one parameter server. In order to execute a native distributed TensorFlow run, you must provide the argument `distributed_backend='ps'`. Using this estimator with these settings, TensorFlow and its dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `TensorFlow` constructor's `pip_packages` or `conda_packages` parameters." "The above code specifies that we will run our training script on `2` nodes, with two workers and one parameter server. In order to execute a native distributed TensorFlow run, you must provide the argument `distributed_backend='ps'`. Using this estimator with these settings, TensorFlow and its dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `TensorFlow` constructor's `pip_packages` or `conda_packages` parameters."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Submit job\n", "### Submit job\n",
"Run your experiment by submitting your estimator object. Note that this call is asynchronous." "Run your experiment by submitting your estimator object. Note that this call is asynchronous."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"run = experiment.submit(estimator)\n", "run = experiment.submit(estimator)\n",
"print(run)" "print(run)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"### Monitor your run\n", "### Monitor your run\n",
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes." "You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.widgets import RunDetails\n", "from azureml.widgets import RunDetails\n",
"\n", "\n",
"RunDetails(run).show()" "RunDetails(run).show()"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Alternatively, you can block until the script has completed training before running more code." "Alternatively, you can block until the script has completed training before running more code."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"run.wait_for_completion(show_output=True) # this provides a verbose log" "run.wait_for_completion(show_output=True) # this provides a verbose log"
] ]
} }
],
"metadata": {
"authors": [
{
"name": "minxia"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "minxia"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"msauthor": "minxia"
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"msauthor": "minxia"
},
"nbformat": 4,
"nbformat_minor": 2
}

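The parameter-server notebook diffed above asks for two workers and one parameter server (`worker_count=2`, `parameter_server_count=1`, `distributed_backend='ps'`). Under native distributed TensorFlow 1.x, that maps onto a cluster spec shaped like the following sketch; the host addresses are hypothetical placeholders, not values produced by the SDK:

```python
def make_cluster_spec(worker_hosts, ps_hosts):
    # Dict shape consumed by tf.train.ClusterSpec / TF_CONFIG in TF 1.x:
    # one entry per job, each listing its task hosts in index order.
    return {"worker": list(worker_hosts), "ps": list(ps_hosts)}

# worker_count=2, parameter_server_count=1, as in the estimator above
spec = make_cluster_spec(["10.0.0.4:2222", "10.0.0.5:2222"], ["10.0.0.4:2223"])
print(len(spec["worker"]), len(spec["ps"]))  # 2 1
```

The estimator builds and distributes this spec for you; `tf_mnist_replica.py` only needs to read its own job name and task index.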

@@ -1,267 +1,265 @@
{ {
"cells": [ "cells": [
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n", "Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n", "\n",
"Licensed under the MIT License." "Licensed under the MIT License."
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Export Run History as Tensorboard logs\n", "# Export Run History as Tensorboard logs\n",
"\n", "\n",
"1. Run some training and log some metrics into Run History\n", "1. Run some training and log some metrics into Run History\n",
"2. Export the run history to some directory as Tensorboard logs\n", "2. Export the run history to some directory as Tensorboard logs\n",
"3. Launch a local Tensorboard to view the run history" "3. Launch a local Tensorboard to view the run history"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Prerequisites\n", "## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n", "* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n", "* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n", " * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)" " * create a workspace and its configuration file (`config.json`)"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"# Check core SDK version number\n", "# Check core SDK version number\n",
"import azureml.core\n", "import azureml.core\n",
"\n", "\n",
"print(\"SDK version:\", azureml.core.VERSION)" "print(\"SDK version:\", azureml.core.VERSION)"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Install the Azure ML TensorBoard integration package if you haven't already." "Install the Azure ML TensorBoard integration package if you haven't already."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"!pip install azureml-contrib-tensorboard" "!pip install azureml-tensorboard"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Initialize Workspace\n", "## Initialize Workspace\n",
"\n", "\n",
"Initialize a workspace object from persisted configuration." "Initialize a workspace object from persisted configuration."
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"from azureml.core import Workspace, Run, Experiment\n", "from azureml.core import Workspace, Experiment\n",
"\n", "\n",
"\n", "ws = Workspace.from_config()\n",
"ws = Workspace.from_config()\n", "print('Workspace name: ' + ws.name, \n",
"print('Workspace name: ' + ws.name, \n", " 'Azure region: ' + ws.location, \n",
" 'Azure region: ' + ws.location, \n", " 'Subscription id: ' + ws.subscription_id, \n",
" 'Subscription id: ' + ws.subscription_id, \n", " 'Resource group: ' + ws.resource_group, sep='\\n')"
" 'Resource group: ' + ws.resource_group, sep = '\\n')" ]
] },
}, {
{ "cell_type": "markdown",
"cell_type": "markdown", "metadata": {},
"metadata": {}, "source": [
"source": [ "## Set experiment name and start the run"
"## Set experiment name and start the run" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "experiment_name = 'export-to-tensorboard'\n",
"experiment_name = 'export-to-tensorboard'\n", "exp = Experiment(ws, experiment_name)\n",
"exp = Experiment(ws, experiment_name)\n", "root_run = exp.start_logging()"
"root_run = exp.start_logging()" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "# load diabetes dataset, a well-known built-in small dataset that comes with scikit-learn\n",
"# load diabetes dataset, a well-known built-in small dataset that comes with scikit-learn\n", "from sklearn.datasets import load_diabetes\n",
"from sklearn.datasets import load_diabetes\n", "from sklearn.linear_model import Ridge\n",
"from sklearn.linear_model import Ridge\n", "from sklearn.metrics import mean_squared_error\n",
"from sklearn.metrics import mean_squared_error\n", "from sklearn.model_selection import train_test_split\n",
"from sklearn.model_selection import train_test_split\n", "\n",
"\n", "X, y = load_diabetes(return_X_y=True)\n",
"X, y = load_diabetes(return_X_y=True)\n", "\n",
"\n", "columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']\n",
"columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']\n", "\n",
"\n", "x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n",
"x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n", "data = {\n",
"data = {\n", " \"train\":{\"x\":x_train, \"y\":y_train}, \n",
" \"train\":{\"x\":x_train, \"y\":y_train}, \n", " \"test\":{\"x\":x_test, \"y\":y_test}\n",
" \"test\":{\"x\":x_test, \"y\":y_test}\n", "}"
"}" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "# Example experiment\n",
"# Example experiment\n", "from tqdm import tqdm\n",
"from tqdm import tqdm\n", "\n",
"\n", "alphas = [.1, .2, .3, .4, .5, .6, .7]\n",
"alphas = [.1, .2, .3, .4, .5, .6, .7]\n", "\n",
"\n", "# try a bunch of alpha values in a Linear Regression (Ridge) model\n",
"# try a bunch of alpha values in a Linear Regression (Ridge) model\n", "for alpha in tqdm(alphas):\n",
"for alpha in tqdm(alphas):\n", " # create a bunch of child runs\n",
" # create a bunch of child runs\n", " with root_run.child_run(\"alpha\" + str(alpha)) as run:\n",
" with root_run.child_run(\"alpha\" + str(alpha)) as run:\n", " # More data science stuff\n",
" # More data science stuff\n", " reg = Ridge(alpha=alpha)\n",
" reg = Ridge(alpha=alpha)\n", " reg.fit(data[\"train\"][\"x\"], data[\"train\"][\"y\"])\n",
" reg.fit(data[\"train\"][\"x\"], data[\"train\"][\"y\"])\n", " \n",
" # TODO save model\n", " preds = reg.predict(data[\"test\"][\"x\"])\n",
" preds = reg.predict(data[\"test\"][\"x\"])\n", " mse = mean_squared_error(preds, data[\"test\"][\"y\"])\n",
" mse = mean_squared_error(preds, data[\"test\"][\"y\"])\n", " # End train and eval\n",
" # End train and eval\n", "\n",
"\n", " # log alpha, mean_squared_error and feature names in run history\n",
" # log alpha, mean_squared_error and feature names in run history\n", " root_run.log(\"alpha\", alpha)\n",
" root_run.log(\"alpha\", alpha)\n", " root_run.log(\"mse\", mse)"
" root_run.log(\"mse\", mse)" ]
] },
}, {
{ "cell_type": "markdown",
"cell_type": "markdown", "metadata": {},
"metadata": {}, "source": [
"source": [ "## Export Run History to Tensorboard logs"
"## Export Run History to Tensorboard logs" ]
] },
}, {
{ "cell_type": "code",
"cell_type": "code", "execution_count": null,
"execution_count": null, "metadata": {},
"metadata": {}, "outputs": [],
"outputs": [], "source": [
"source": [ "# Export Run History to Tensorboard logs\n",
"# Export Run History to Tensorboard logs\n", "from azureml.tensorboard.export import export_to_tensorboard\n",
"from azureml.contrib.tensorboard.export import export_to_tensorboard\n", "import os\n",
"import os\n", "\n",
"import tensorflow as tf\n", "logdir = 'exportedTBlogs'\n",
"\n", "log_path = os.path.join(os.getcwd(), logdir)\n",
"logdir = 'exportedTBlogs'\n", "try:\n",
"log_path = os.path.join(os.getcwd(), logdir)\n", " os.stat(log_path)\n",
"try:\n", "except os.error:\n",
" os.stat(log_path)\n", " os.mkdir(log_path)\n",
"except os.error:\n", "print(logdir)\n",
" os.mkdir(log_path)\n", "\n",
"print(logdir)\n", "# export run history for the project\n",
"\n", "export_to_tensorboard(root_run, logdir)\n",
"# export run history for the project\n", "\n",
"export_to_tensorboard(root_run, logdir)\n", "# or export a particular run\n",
"\n", "# export_to_tensorboard(run, logdir)"
"# or export a particular run\n", ]
"# export_to_tensorboard(run, logdir)" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "root_run.complete()"
"source": [ ]
"root_run.complete()" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "## Start Tensorboard\n",
"source": [ "\n",
"## Start Tensorboard\n", "Or you can start the Tensorboard outside this notebook to view the result"
"\n", ]
"Or you can start the Tensorboard outside this notebook to view the result" },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "from azureml.tensorboard import Tensorboard\n",
"source": [ "\n",
"from azureml.contrib.tensorboard import Tensorboard\n", "# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here\n",
"\n", "tb = Tensorboard([], local_root=logdir, port=6006)\n",
"# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here\n", "\n",
"tb = Tensorboard([], local_root=logdir, port=6006)\n", "# If successful, start() returns a string with the URI of the instance.\n",
"\n", "tb.start()"
"# If successful, start() returns a string with the URI of the instance.\n", ]
"tb.start()" },
] {
}, "cell_type": "markdown",
{ "metadata": {},
"cell_type": "markdown", "source": [
"metadata": {}, "## Stop Tensorboard\n",
"source": [ "\n",
"## Stop Tensorboard\n", "When you're done, make sure to call the `stop()` method of the Tensorboard object."
"\n", ]
"When you're done, make sure to call the `stop()` method of the Tensorboard object." },
] {
}, "cell_type": "code",
{ "execution_count": null,
"cell_type": "code", "metadata": {},
"execution_count": null, "outputs": [],
"metadata": {}, "source": [
"outputs": [], "tb.stop()"
"source": [ ]
"tb.stop()" }
]
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
], ],
"kernelspec": { "metadata": {
"display_name": "Python 3.6", "authors": [
"language": "python", {
"name": "python36" "name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
}, },
"language_info": { "nbformat": 4,
"codemirror_mode": { "nbformat_minor": 2
"name": "ipython", }
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,16 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
print("*********************************************************")
print("Hello Azure ML!")
try:
from azureml.core import Run
run = Run.get_context()
print("Log Fibonacci numbers.")
run.log_list('Fibonacci numbers', [0, 1, 1, 2, 3, 5, 8, 13, 21, 34])
run.complete()
except:
print("Warning: you need to install Azure ML SDK in order to log metrics.")
print("*********************************************************")

View File

@@ -0,0 +1,363 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {
"nbpresent": {
"id": "bf74d2e9-2708-49b1-934b-e0ede342f475"
}
},
"source": [
"# How to use Estimator in Azure ML\n",
"\n",
"## Introduction\n",
"This tutorial shows how to use the Estimator pattern in Azure Machine Learning SDK. Estimator is a convenient object in Azure Machine Learning that wraps run configuration information to help simplify the tasks of specifying how a script is executed.\n",
"\n",
"\n",
"## Prerequisite:\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's get started. First let's import some Python libraries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"nbpresent": {
"id": "edaa7f2f-2439-4148-b57a-8c794c0945ec"
}
},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace\n",
"\n",
"# check core SDK version number\n",
"print(\"Azure ML SDK Version: \", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {
"nbpresent": {
"id": "59f52294-4a25-4c92-bab8-3b07f0f44d15"
}
},
"source": [
"## Create an Azure ML experiment\n",
"Let's create an experiment named \"estimator-test\". The script runs will be recorded under this experiment in Azure."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"nbpresent": {
"id": "bc70f780-c240-4779-96f3-bc5ef9a37d59"
}
},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"exp = Experiment(workspace=ws, name='estimator-test')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we could not find the cluster with the given name, then we will create a new cluster here. We will create an `AmlCompute` cluster of `STANDARD_NC6` GPU VMs. This process is broken down into 3 steps:\n",
"1. create the configuration (this step is local and only takes a second)\n",
"2. create the cluster (this step will take about **20 seconds**)\n",
"3. provision the VMs to bring the cluster to the initial size (of 1 in this case). This step will take about **3-5 minutes** and is providing only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# choose a name for your cluster\n",
"cluster_name = \"cpucluster\"\n",
"\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target')\n",
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', max_nodes=4)\n",
"\n",
" # create the cluster\n",
" cpu_cluster = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it uses the scale settings for the cluster\n",
" cpu_cluster.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(cpu_cluster.get_status().serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that you have created the compute target, let's see what the workspace's `compute_targets` property returns. You should now see one entry named 'cpucluster' of type `AmlCompute`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"compute_targets = ws.compute_targets\n",
"for name, ct in compute_targets.items():\n",
" print(name, ct.type, ct.provisioning_state)"
]
},
{
"cell_type": "markdown",
"metadata": {
"nbpresent": {
"id": "2039d2d5-aca6-4f25-a12f-df9ae6529cae"
}
},
"source": [
"## Use a simple script\n",
"We have already created a simple \"hello world\" script. This is the script that we will submit through the estimator pattern. It prints a hello-world message, and if Azure ML SDK is installed, it will also logs an array of values ([Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number))."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./dummy_train.py', 'r') as f:\n",
" print(f.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create A Generic Estimator"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we import the Estimator class and also a widget to visualize a run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.estimator import Estimator\n",
"from azureml.widgets import RunDetails"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The simplest estimator is to submit the current folder to the local computer. Estimator by default will attempt to use Docker-based execution. Let's turn that off for now. It then builds a conda environment locally, installs Azure ML SDK in it, and runs your script."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# use a conda environment, don't use Docker, on local computer\n",
"est = Estimator(source_directory='.', compute_target='local', entry_script='dummy_train.py', use_docker=False)\n",
"run = exp.submit(est)\n",
"RunDetails(run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also enable Docker and let estimator pick the default CPU image supplied by Azure ML for execution. You can target an AmlCompute cluster (or any other supported compute target types)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# use a conda environment on default Docker image in an AmlCompute cluster\n",
"est = Estimator(source_directory='.', compute_target=cpu_cluster, entry_script='dummy_train.py', use_docker=True)\n",
"run = exp.submit(est)\n",
"RunDetails(run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can customize the conda environment by adding conda and/or pip packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# add a conda package\n",
"est = Estimator(source_directory='.', \n",
" compute_target='local', \n",
" entry_script='dummy_train.py', \n",
" use_docker=False, \n",
" conda_packages=['scikit-learn'])\n",
"run = exp.submit(est)\n",
"RunDetails(run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also specify a custom Docker image for exeution. In this case, you probably want to tell the system not to build a new conda environment for you. Instead, you can specify the path to an existing Python environment in the custom Docker image.\n",
"\n",
"**Note**: since the below example points to the preinstalled Python environment in the miniconda3 image maintained by continuum.io on Docker Hub where Azure ML SDK is not present, the logging metric code is not triggered. But a run history record is still recorded. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# use a custom Docker image\n",
"from azureml.core.runconfig import ContainerRegistry\n",
"\n",
"# this is an image available in Docker Hub\n",
"image_name = 'continuumio/miniconda3'\n",
"\n",
"# you can also point to an image in a private ACR\n",
"image_registry_details = ContainerRegistry()\n",
"image_registry_details.address = \"myregistry.azurecr.io\"\n",
"image_registry_details.username = \"username\"\n",
"image_registry_details.password = \"password\"\n",
"\n",
"# don't let the system build a new conda environment\n",
"user_managed_dependencies = True\n",
"\n",
"# submit to a local Docker container. if you don't have Docker engine running locally, you can set compute_target to cpu_cluster.\n",
"est = Estimator(source_directory='.', compute_target='local', \n",
" entry_script='dummy_train.py',\n",
" custom_docker_image=image_name,\n",
" image_registry_details=image_registry_details,\n",
" user_managed=user_managed_dependencies\n",
" )\n",
"\n",
"run = exp.submit(est)\n",
"RunDetails(run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps\n",
"Now you can proceed to explore the other types of estimators, such as TensorFlow estimator, PyTorch estimator, etc. in the sample folder."
]
}
],
"metadata": {
"authors": [
{
"name": "minxia"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
},
"msauthor": "haining"
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,136 @@
import argparse
import numpy as np
import chainer
from chainer import backend
from chainer import backends
from chainer.backends import cuda
from chainer import Function, gradient_check, report, training, utils, Variable
from chainer import datasets, iterators, optimizers, serializers
from chainer import Link, Chain, ChainList
import chainer.functions as F
import chainer.links as L
from chainer.training import extensions
from chainer.dataset import concat_examples
from chainer.backends.cuda import to_cpu
from azureml.core.run import Run
run = Run.get_context()
class MyNetwork(Chain):
def __init__(self, n_mid_units=100, n_out=10):
super(MyNetwork, self).__init__()
with self.init_scope():
self.l1 = L.Linear(None, n_mid_units)
self.l2 = L.Linear(n_mid_units, n_mid_units)
self.l3 = L.Linear(n_mid_units, n_out)
def forward(self, x):
h = F.relu(self.l1(x))
h = F.relu(self.l2(h))
return self.l3(h)
def main():
parser = argparse.ArgumentParser(description='Chainer example: MNIST')
parser.add_argument('--batchsize', '-b', type=int, default=100,
help='Number of images in each mini-batch')
parser.add_argument('--epochs', '-e', type=int, default=20,
help='Number of sweeps over the dataset to train')
parser.add_argument('--output_dir', '-o', default='./outputs',
help='Directory to output the result')
parser.add_argument('--gpu_id', '-g', default=0,
help='ID of the GPU to be used. Set to -1 if you use CPU')
args = parser.parse_args()
# Download the MNIST data if you haven't downloaded it yet
train, test = datasets.mnist.get_mnist(withlabel=True, ndim=1)
gpu_id = args.gpu_id
batchsize = args.batchsize
epochs = args.epochs
run.log('Batch size', np.int(batchsize))
run.log('Epochs', np.int(epochs))
train_iter = iterators.SerialIterator(train, batchsize)
test_iter = iterators.SerialIterator(test, batchsize,
repeat=False, shuffle=False)
model = MyNetwork()
if gpu_id >= 0:
# Make a specified GPU current
chainer.backends.cuda.get_device_from_id(0).use()
model.to_gpu() # Copy the model to the GPU
# Choose an optimizer algorithm
optimizer = optimizers.MomentumSGD(lr=0.01, momentum=0.9)
# Give the optimizer a reference to the model so that it
# can locate the model's parameters.
optimizer.setup(model)
while train_iter.epoch < epochs:
# ---------- One iteration of the training loop ----------
train_batch = train_iter.next()
image_train, target_train = concat_examples(train_batch, gpu_id)
# Calculate the prediction of the network
prediction_train = model(image_train)
# Calculate the loss with softmax_cross_entropy
loss = F.softmax_cross_entropy(prediction_train, target_train)
# Calculate the gradients in the network
model.cleargrads()
loss.backward()
# Update all the trainable parameters
optimizer.update()
# --------------------- until here ---------------------
# Check the validation accuracy of prediction after every epoch
if train_iter.is_new_epoch: # If this iteration is the final iteration of the current epoch
# Display the training loss
print('epoch:{:02d} train_loss:{:.04f} '.format(
train_iter.epoch, float(to_cpu(loss.array))), end='')
test_losses = []
test_accuracies = []
while True:
test_batch = test_iter.next()
image_test, target_test = concat_examples(test_batch, gpu_id)
# Forward the test data
prediction_test = model(image_test)
# Calculate the loss
loss_test = F.softmax_cross_entropy(prediction_test, target_test)
test_losses.append(to_cpu(loss_test.array))
# Calculate the accuracy
accuracy = F.accuracy(prediction_test, target_test)
accuracy.to_cpu()
test_accuracies.append(accuracy.array)
if test_iter.is_new_epoch:
test_iter.epoch = 0
test_iter.current_position = 0
test_iter.is_new_epoch = False
test_iter._pushed_position = None
break
val_accuracy = np.mean(test_accuracies)
print('val_loss:{:.04f} val_accuracy:{:.04f}'.format(
np.mean(test_losses), val_accuracy))
run.log("Accuracy", np.float(val_accuracy))
if __name__ == '__main__':
main()
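The script's command-line interface is what an Azure ML estimator's `script_params` dictionary maps onto. A quick standalone sketch of that mapping, rebuilding the same argument parser and feeding it illustrative flag values (the values themselves are examples, not from the repo):

```python
import argparse

# Rebuild the script's argument parser in isolation.
parser = argparse.ArgumentParser(description='Chainer example: MNIST')
parser.add_argument('--batchsize', '-b', type=int, default=100,
                    help='Number of images in each mini-batch')
parser.add_argument('--epochs', '-e', type=int, default=20,
                    help='Number of sweeps over the dataset to train')
parser.add_argument('--output_dir', '-o', default='./outputs',
                    help='Directory to output the result')
parser.add_argument('--gpu_id', '-g', type=int, default=0,
                    help='ID of the GPU to be used. Set to -1 if you use CPU')

# These flags are what a script_params dict like
# {'--batchsize': 128, '--epochs': 10} turns into on the command line.
args = parser.parse_args(['--batchsize', '128', '--epochs', '10', '--gpu_id', '-1'])
print(args.batchsize, args.epochs, args.gpu_id, args.output_dir)
```

Note that `type=int` matters: without it, `args.gpu_id` arrives as a string and the script's `gpu_id >= 0` comparison fails under Python 3.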

View File

@@ -0,0 +1,134 @@
import argparse
import numpy as np
import chainer
from chainer import backend
from chainer import backends
from chainer.backends import cuda
from chainer import Function, gradient_check, report, training, utils, Variable
from chainer import datasets, iterators, optimizers, serializers
from chainer import Link, Chain, ChainList
import chainer.functions as F
import chainer.links as L
from chainer.training import extensions
from chainer.dataset import concat_examples
from chainer.backends.cuda import to_cpu
from azureml.core.run import Run
run = Run.get_context()
class MyNetwork(Chain):
    def __init__(self, n_mid_units=100, n_out=10):
        super(MyNetwork, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_mid_units)
            self.l2 = L.Linear(n_mid_units, n_mid_units)
            self.l3 = L.Linear(n_mid_units, n_out)

    def forward(self, x):
        h = F.relu(self.l1(x))
        h = F.relu(self.l2(h))
        return self.l3(h)


def main():
    parser = argparse.ArgumentParser(description='Chainer example: MNIST')
    parser.add_argument('--batchsize', '-b', type=int, default=100,
                        help='Number of images in each mini-batch')
    parser.add_argument('--epochs', '-e', type=int, default=20,
                        help='Number of sweeps over the dataset to train')
    parser.add_argument('--output_dir', '-o', default='./outputs',
                        help='Directory to output the result')
    args = parser.parse_args()

    # Download the MNIST data if you haven't downloaded it yet
    train, test = datasets.mnist.get_mnist(withlabel=True, ndim=1)

    batchsize = args.batchsize
    epochs = args.epochs

    run.log('Batch size', np.int(batchsize))
    run.log('Epochs', np.int(epochs))

    train_iter = iterators.SerialIterator(train, batchsize)
    test_iter = iterators.SerialIterator(test, batchsize,
                                         repeat=False, shuffle=False)

    model = MyNetwork()

    gpu_id = -1  # Set to -1 if you use CPU
    if gpu_id >= 0:
        # Make a specified GPU current
        chainer.backends.cuda.get_device_from_id(gpu_id).use()
        model.to_gpu()  # Copy the model to the GPU

    # Choose an optimizer algorithm
    optimizer = optimizers.MomentumSGD(lr=0.01, momentum=0.9)
    # Give the optimizer a reference to the model so that it
    # can locate the model's parameters.
    optimizer.setup(model)

    while train_iter.epoch < epochs:
        # ---------- One iteration of the training loop ----------
        train_batch = train_iter.next()
        image_train, target_train = concat_examples(train_batch, gpu_id)

        # Calculate the prediction of the network
        prediction_train = model(image_train)

        # Calculate the loss with softmax_cross_entropy
        loss = F.softmax_cross_entropy(prediction_train, target_train)

        # Calculate the gradients in the network
        model.cleargrads()
        loss.backward()

        # Update all the trainable parameters
        optimizer.update()
        # --------------------- until here ---------------------

        # Check the validation accuracy of prediction after every epoch
        if train_iter.is_new_epoch:  # If this iteration is the final iteration of the current epoch
            # Display the training loss
            print('epoch:{:02d} train_loss:{:.04f} '.format(
                train_iter.epoch, float(to_cpu(loss.array))), end='')

            test_losses = []
            test_accuracies = []
            while True:
                test_batch = test_iter.next()
                image_test, target_test = concat_examples(test_batch, gpu_id)

                # Forward the test data
                prediction_test = model(image_test)

                # Calculate the loss
                loss_test = F.softmax_cross_entropy(prediction_test, target_test)
                test_losses.append(to_cpu(loss_test.array))

                # Calculate the accuracy
                accuracy = F.accuracy(prediction_test, target_test)
                accuracy.to_cpu()
                test_accuracies.append(accuracy.array)

                if test_iter.is_new_epoch:
                    test_iter.epoch = 0
                    test_iter.current_position = 0
                    test_iter.is_new_epoch = False
                    test_iter._pushed_position = None
                    break

            val_accuracy = np.mean(test_accuracies)
            print('val_loss:{:.04f} val_accuracy:{:.04f}'.format(
                np.mean(test_losses), val_accuracy))
            run.log("Accuracy", np.float(val_accuracy))


if __name__ == '__main__':
    main()

View File

@@ -0,0 +1,425 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Train and hyperparameter tune with Chainer\n",
"\n",
"In this tutorial, we demonstrate how to use the Azure ML Python SDK to train a Convolutional Neural Network (CNN) on a single-node GPU with Chainer to perform handwritten digit recognition on the popular MNIST dataset. We will also demonstrate how to perform hyperparameter tuning of the model using Azure ML's HyperDrive service."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"Diagnostics"
]
},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource.\n",
"\n",
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process.\n",
"\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# choose a name for your cluster\n",
"cluster_name = \"gpucluster\"\n",
"\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target.')\n",
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
" max_nodes=4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train model on the remote compute\n",
"Now that you have your data and training script prepared, you are ready to train on your remote compute cluster. You can take advantage of Azure compute to leverage GPUs to cut down your training time. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a project directory\n",
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"project_folder = './chainer-mnist'\n",
"os.makedirs(project_folder, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare training script\n",
"Now you will need to create your training script. In this tutorial, the training script is already provided for you at `chainer_mnist.py`. In practice, you should be able to take any custom training script as is and run it with Azure ML without having to modify your code.\n",
"\n",
"However, if you would like to use Azure ML's [tracking and metrics](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#metrics) capabilities, you will have to add a small amount of Azure ML code inside your training script. \n",
"\n",
"In `chainer_mnist.py`, we will log some metrics to our Azure ML run. To do so, we will access the Azure ML `Run` object within the script:\n",
"```Python\n",
"from azureml.core.run import Run\n",
"run = Run.get_context()\n",
"```\n",
"Further within `chainer_mnist.py`, we log the batchsize and epochs parameters, and the highest accuracy the model achieves:\n",
"```Python\n",
"run.log('Batch size', np.int(args.batchsize))\n",
"run.log('Epochs', np.int(args.epochs))\n",
"\n",
"run.log('Accuracy', np.float(val_accuracy))\n",
"```\n",
"These run metrics will become particularly important when we begin hyperparameter tuning our model in the \"Tune model hyperparameters\" section."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once your script is ready, copy the training script `chainer_mnist.py` into your project directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import shutil\n",
"\n",
"shutil.copy('chainer_mnist.py', project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create an experiment\n",
"Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this Chainer tutorial. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"experiment_name = 'chainer-mnist'\n",
"experiment = Experiment(ws, name=experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a Chainer estimator\n",
"The Azure ML SDK's Chainer estimator enables you to easily submit Chainer training jobs for both single-node and distributed runs. The following code will define a single-node Chainer job."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.dnn import Chainer\n",
"\n",
"script_params = {\n",
" '--epochs': 10,\n",
" '--batchsize': 128,\n",
" '--output_dir': './outputs'\n",
"}\n",
"\n",
"estimator = Chainer(source_directory=project_folder, \n",
" script_params=script_params,\n",
" compute_target=compute_target,\n",
" pip_packages=['numpy', 'pytest'],\n",
" entry_script='chainer_mnist.py',\n",
" use_gpu=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `script_params` parameter is a dictionary containing the command-line arguments to your training script `entry_script`. To leverage the Azure VM's GPU for training, we set `use_gpu=True`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit job\n",
"Run your experiment by submitting your estimator object. Note that this call is asynchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run = experiment.submit(estimator)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor your run\n",
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# to get more details of your run\n",
"print(run.get_details())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tune model hyperparameters\n",
"Now that we've seen how to do a simple Chainer training run using the SDK, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Start a hyperparameter sweep\n",
"First, we will define the hyperparameter space to sweep over. Let's tune the batch size and epochs parameters. In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, accuracy.\n",
"\n",
"Then, we specify the early termination policy to use to early terminate poorly performing runs. Here we use the `BanditPolicy`, which will terminate any run that doesn't fall within the slack factor of our primary evaluation metric. In this tutorial, we will apply this policy every epoch (since we report our `Accuracy` metric every epoch and `evaluation_interval=1`). Notice we will delay the first policy evaluation until after the first `3` epochs (`delay_evaluation=3`).\n",
"Refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-tune-hyperparameters#specify-an-early-termination-policy) for more information on the BanditPolicy and other policies available."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.hyperdrive.runconfig import HyperDriveRunConfig\n",
"from azureml.train.hyperdrive.sampling import RandomParameterSampling\n",
"from azureml.train.hyperdrive.policy import BanditPolicy\n",
"from azureml.train.hyperdrive.run import PrimaryMetricGoal\n",
"from azureml.train.hyperdrive.parameter_expressions import choice\n",
" \n",
"\n",
"param_sampling = RandomParameterSampling( {\n",
" \"--batchsize\": choice(128, 256),\n",
" \"--epochs\": choice(5, 10, 20, 40)\n",
" }\n",
")\n",
"\n",
"hyperdrive_run_config = HyperDriveRunConfig(estimator=estimator,\n",
" hyperparameter_sampling=param_sampling, \n",
" primary_metric_name='Accuracy',\n",
" primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,\n",
" max_total_runs=8,\n",
" max_concurrent_runs=4)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, lauch the hyperparameter tuning job."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# start the HyperDrive run\n",
"hyperdrive_run = experiment.submit(hyperdrive_run_config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor HyperDrive runs\n",
"You can monitor the progress of the runs with the following Jupyter widget. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RunDetails(hyperdrive_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=True)"
]
}
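,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once all the runs complete, you can retrieve the best-performing configuration. A minimal sketch, assuming the sweep finished successfully (`get_best_run_by_primary_metric` returns the child run that scored highest on the primary metric, `Accuracy`):\n",
"```Python\n",
"best_run = hyperdrive_run.get_best_run_by_primary_metric()\n",
"print(best_run.get_metrics())\n",
"```"
]
}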
],
"metadata": {
"authors": [
{
"name": "minxia"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"msauthor": "minxia"
},
"nbformat": 4,
"nbformat_minor": 2
}

# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import numpy as np
import argparse
import os
import matplotlib.pyplot as plt
import keras
from keras.models import Sequential, model_from_json
from keras.layers import Dense
from keras.optimizers import RMSprop
from keras.callbacks import Callback
import tensorflow as tf
from azureml.core import Run
from utils import load_data, one_hot_encode
print("Keras version:", keras.__version__)
print("Tensorflow version:", tf.__version__)
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder mounting point')
parser.add_argument('--batch-size', type=int, dest='batch_size', default=50, help='mini batch size for training')
parser.add_argument('--first-layer-neurons', type=int, dest='n_hidden_1', default=100,
                    help='# of neurons in the first layer')
parser.add_argument('--second-layer-neurons', type=int, dest='n_hidden_2', default=100,
                    help='# of neurons in the second layer')
parser.add_argument('--learning-rate', type=float, dest='learning_rate', default=0.001, help='learning rate')
args = parser.parse_args()
data_folder = args.data_folder
print('training dataset is stored here:', data_folder)
X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0
X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0
y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)
y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)
training_set_size = X_train.shape[0]
n_inputs = 28 * 28
n_h1 = args.n_hidden_1
n_h2 = args.n_hidden_2
n_outputs = 10
n_epochs = 20
batch_size = args.batch_size
learning_rate = args.learning_rate
y_train = one_hot_encode(y_train, n_outputs)
y_test = one_hot_encode(y_test, n_outputs)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape, sep='\n')
# Build a simple MLP model
model = Sequential()
# first hidden layer
model.add(Dense(n_h1, activation='relu', input_shape=(n_inputs,)))
# second hidden layer
model.add(Dense(n_h2, activation='relu'))
# output layer
model.add(Dense(n_outputs, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(lr=learning_rate),
              metrics=['accuracy'])
# start an Azure ML run
run = Run.get_context()
class LogRunMetrics(Callback):
    # callback at the end of every epoch
    def on_epoch_end(self, epoch, log):
        # log a value repeated which creates a list
        run.log('Loss', log['loss'])
        run.log('Accuracy', log['acc'])
history = model.fit(X_train, y_train,
                    batch_size=batch_size,
                    epochs=n_epochs,
                    verbose=2,
                    validation_data=(X_test, y_test),
                    callbacks=[LogRunMetrics()])
score = model.evaluate(X_test, y_test, verbose=0)
# log a single value
run.log("Final test loss", score[0])
print('Test loss:', score[0])
run.log('Final test accuracy', score[1])
print('Test accuracy:', score[1])
plt.figure(figsize=(6, 3))
plt.title('MNIST with Keras MLP ({} epochs)'.format(n_epochs), fontsize=14)
plt.plot(history.history['acc'], 'b-', label='Accuracy', lw=4, alpha=0.5)
plt.plot(history.history['loss'], 'r--', label='Loss', lw=4, alpha=0.5)
plt.legend(fontsize=12)
plt.grid(True)
# log an image
run.log_image('Accuracy vs Loss', plot=plt)
# create a ./outputs/model folder in the compute target
# files saved in the "./outputs" folder are automatically uploaded into run history
os.makedirs('./outputs/model', exist_ok=True)
# serialize NN architecture to JSON
model_json = model.to_json()
# save model JSON
with open('./outputs/model/model.json', 'w') as f:
    f.write(model_json)
# save model weights
model.save_weights('./outputs/model/model.h5')
print("model saved in ./outputs/model folder")

# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import gzip
import numpy as np
import struct
# load compressed MNIST gz files and return numpy arrays
def load_data(filename, label=False):
    with gzip.open(filename) as gz:
        struct.unpack('I', gz.read(4))
        n_items = struct.unpack('>I', gz.read(4))
        if not label:
            n_rows = struct.unpack('>I', gz.read(4))[0]
            n_cols = struct.unpack('>I', gz.read(4))[0]
            res = np.frombuffer(gz.read(n_items[0] * n_rows * n_cols), dtype=np.uint8)
            res = res.reshape(n_items[0], n_rows * n_cols)
        else:
            res = np.frombuffer(gz.read(n_items[0]), dtype=np.uint8)
            res = res.reshape(n_items[0], 1)
    return res
# one-hot encode a 1-D array
def one_hot_encode(array, num_of_classes):
    return np.eye(num_of_classes)[array.reshape(-1)]
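As a quick sanity check of these helpers, the self-contained sketch below writes a tiny MNIST-style label file and reads it back. `load_data` is inlined from the listing above; the magic number 2049 is the standard MNIST label-file header, and the temp-file path is just for illustration.

```python
import gzip
import os
import struct
import tempfile

import numpy as np


def load_data(filename, label=False):
    # same parser as utils.py: big-endian header fields, then raw uint8 payload
    with gzip.open(filename) as gz:
        struct.unpack('I', gz.read(4))  # magic number (read and discarded)
        n_items = struct.unpack('>I', gz.read(4))
        if not label:
            n_rows = struct.unpack('>I', gz.read(4))[0]
            n_cols = struct.unpack('>I', gz.read(4))[0]
            res = np.frombuffer(gz.read(n_items[0] * n_rows * n_cols), dtype=np.uint8)
            res = res.reshape(n_items[0], n_rows * n_cols)
        else:
            res = np.frombuffer(gz.read(n_items[0]), dtype=np.uint8)
            res = res.reshape(n_items[0], 1)
    return res


# write a three-label file in the MNIST label-file format
path = os.path.join(tempfile.mkdtemp(), 'labels.gz')
with gzip.open(path, 'wb') as gz:
    gz.write(struct.pack('>I', 2049))  # magic number for label files
    gz.write(struct.pack('>I', 3))     # item count
    gz.write(bytes([7, 0, 3]))         # the labels themselves

labels = load_data(path, label=True).reshape(-1)
onehot = np.eye(10)[labels]  # same identity-matrix row lookup as one_hot_encode
print(labels.tolist())  # [7, 0, 3]
```

The round trip confirms the header layout the parser assumes: one 4-byte magic number, one 4-byte big-endian item count, then one byte per label.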
