Compare commits

...

11 Commits

Author | SHA1 | Message | Date
Cody | 12905ef254 | Update README.md | 2020-11-02 06:59:44 -08:00
Harneet Virk | 4cf56eee91 | Merge pull request #1217 from Azure/release_update/Release-74 (update samples from Release-74 as a part of SDK release) | 2020-10-30 17:27:02 -07:00
amlrelsa-ms | d345ff6c37 | update samples from Release-74 as a part of SDK release | 2020-10-30 22:20:10 +00:00
Harneet Virk | 560dcac0a0 | Merge pull request #1214 from Azure/release_update/Release-73 (update samples from Release-73 as a part of SDK release) | 2020-10-29 23:38:02 -07:00
amlrelsa-ms | 322087a58c | update samples from Release-73 as a part of SDK release | 2020-10-30 06:37:05 +00:00
Harneet Virk | e255c000ab | Merge pull request #1211 from Azure/release_update/Release-72 (update samples from Release-72 as a part of SDK release) | 2020-10-28 14:30:50 -07:00
amlrelsa-ms | 7871e37ec0 | update samples from Release-72 as a part of SDK release | 2020-10-28 21:24:40 +00:00
Cody | 58e584e7eb | Update README.md (#1209) | 2020-10-27 21:00:38 -04:00
Harneet Virk | 1b0d75cb45 | Merge pull request #1206 from Azure/release_update/Release-71 (update samples from Release-71 as a part of SDK 1.17.0 release) | 2020-10-26 22:29:48 -07:00
amlrelsa-ms | 5c38272fb4 | update samples from Release-71 as a part of SDK release | 2020-10-27 04:11:39 +00:00
Harneet Virk | e026c56f19 | Merge pull request #1200 from Azure/cody/add-new-repo-link (update readme) | 2020-10-22 10:50:03 -07:00
36 changed files with 1456 additions and 150 deletions

View File

@@ -20,10 +20,10 @@ This [index](./index.md) should assist in navigating the Azure Machine Learning
 If you want to...
 * ...try out and explore Azure ML, start with image classification tutorials: [Part 1 (Training)](./tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb) and [Part 2 (Deployment)](./tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb).
-* ...learn about experimentation and tracking run history, try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb).
+* ...learn about experimentation and tracking run history, first [train within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) and [using logging APIs](./how-to-use-azureml/training/logging-api/logging-api.ipynb).
-* ...train deep learning models at scale, learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb)
+* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
 * ...deploy models as a realtime scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
-* ...deploy models as a batch scoring service, [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
+* ...deploy models as a batch scoring service, first [train a model within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
 * ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb).
 ## Tutorials
@@ -35,11 +35,13 @@ The [Tutorials](./tutorials) folder contains notebooks for the tutorials describ
 The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
 - [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets
+- [Training with Deep Learning](./how-to-use-azureml/training-with-deep-learning) - Examples demonstrating how to build deep learning models using estimators and parameter sweeps
 - [Manage Azure ML Service](./how-to-use-azureml/manage-azureml-service) - Examples how to perform tasks, such as authenticate against Azure ML service in different ways.
 - [Automated Machine Learning](./how-to-use-azureml/automated-machine-learning) - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models
 - [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
 - [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions
 - [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks
+- [Monitor Models](./how-to-use-azureml/monitor-models) - Examples showing how to enable model monitoring services such as DataDrift
 - [Reinforcement Learning](./how-to-use-azureml/reinforcement-learning) - Examples showing how to train reinforcement learning agents
 ---
@@ -58,7 +60,7 @@ Visit this [community repository](https://github.com/microsoft/MLOps/tree/master
 ## Projects using Azure Machine Learning
 Visit following repos to see projects contributed by Azure ML users:
-- [AML Examples](https://github.com/Azure/azureml-examples)
+- [AMLSamples](https://github.com/Azure/AMLSamples) Number of end-to-end examples, including face recognition, predictive maintenance, customer churn and sentiment analysis.
 - [Learn about Natural Language Processing best practices using Azure Machine Learning service](https://github.com/microsoft/nlp)
 - [Pre-Train BERT models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
 - [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)

View File

@@ -103,7 +103,7 @@
 "source": [
 "import azureml.core\n",
 "\n",
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
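Most of the notebook diffs in this compare are the same mechanical bump of the advertised SDK version from 1.16.0 to 1.17.0. A minimal sketch for checking that a local environment matches the new pin, assuming azureml-core is installed:

```python
# Hedged check: confirm the installed Azure ML SDK is the 1.17.x line the notebooks now expect.
import azureml.core

assert azureml.core.VERSION.startswith("1.17"), (
    "Expected SDK 1.17.x, found " + azureml.core.VERSION
    + "; upgrade with: pip install --upgrade azureml-sdk"
)
```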

View File

@@ -4,6 +4,7 @@ Learn how to use Azure Machine Learning services for experimentation and model m
 As a pre-requisite, run the [configuration Notebook](../configuration.ipynb) notebook first to set up your Azure ML Workspace. Then, run the notebooks in following recommended order.
+* [train-within-notebook](./training/train-within-notebook): Train a model while tracking run history, and learn how to deploy the model as web service to Azure Container Instance.
 * [train-on-local](./training/train-on-local): Learn how to submit a run to local computer and use Azure ML managed run configuration.
 * [train-on-amlcompute](./training/train-on-amlcompute): Use a 1-n node Azure ML managed compute cluster for remote runs on Azure CPU or GPU infrastructure.
 * [train-on-remote-vm](./training/train-on-remote-vm): Use Data Science Virtual Machine as a target for remote runs.

View File

@@ -97,62 +97,96 @@ jupyter notebook
 <a name="databricks"></a>
 ## Setup using Azure Databricks
-**NOTE**: Please create your Azure Databricks cluster as v6.0 (high concurrency preferred) with **Python 3** (dropdown).
+**NOTE**: Please create your Azure Databricks cluster as v7.1 (high concurrency preferred) with **Python 3** (dropdown).
 **NOTE**: You should at least have contributor access to your Azure subscription to run the notebook.
-- Please remove the previous SDK version if there is any and install the latest SDK by installing **azureml-sdk[automl]** as a PyPi library in Azure Databricks workspace.
-- You can find the detail Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks).
-- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks) and import into the Azure databricks workspace.
+- You can find the detail Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl).
+- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl) and import into the Azure databricks workspace.
 - Attach the notebook to the cluster.
 <a name="samples"></a>
 # Automated ML SDK Sample Notebooks
-- [auto-ml-classification-credit-card-fraud.ipynb](classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb)
-  - Dataset: Kaggle's [credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
-  - Simple example of using automated ML for classification to fraudulent credit card transactions
-  - Uses azure compute for training
-- [auto-ml-regression.ipynb](regression/auto-ml-regression.ipynb)
-  - Dataset: Hardware Performance Dataset
-  - Simple example of using automated ML for regression
-  - Uses azure compute for training
-- [auto-ml-regression-explanation-featurization.ipynb](regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb)
-  - Dataset: Hardware Performance Dataset
-  - Shows featurization and excplanation
-  - Uses azure compute for training
-- [auto-ml-forecasting-energy-demand.ipynb](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
-  - Dataset: [NYC energy demand data](forecasting-a/nyc_energy.csv)
-  - Example of using automated ML for training a forecasting model
-- [auto-ml-classification-credit-card-fraud-local.ipynb](local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb)
-  - Dataset: Kaggle's [credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
-  - Simple example of using automated ML for classification to fraudulent credit card transactions
-  - Uses local compute for training
-- [auto-ml-classification-bank-marketing-all-features.ipynb](classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb)
-  - Dataset: UCI's [bank marketing dataset](https://www.kaggle.com/janiobachmann/bank-marketing-dataset)
-  - Simple example of using automated ML for classification to predict term deposit subscriptions for a bank
-  - Uses azure compute for training
-- [auto-ml-forecasting-orange-juice-sales.ipynb](forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb)
-  - Dataset: [Dominick's grocery sales of orange juice](forecasting-b/dominicks_OJ.csv)
-  - Example of training an automated ML forecasting model on multiple time-series
-- [auto-ml-forecasting-bike-share.ipynb](forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
-  - Dataset: forecasting for a bike-sharing
-  - Example of training an automated ML forecasting model on multiple time-series
-- [auto-ml-forecasting-function.ipynb](forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
-  - Example of training an automated ML forecasting model on multiple time-series
-- [auto-ml-forecasting-beer-remote.ipynb](forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb)
-  - Example of training an automated ML forecasting model on multiple time-series
-  - Beer Production Forecasting
-- [auto-ml-continuous-retraining.ipynb](continuous-retraining/auto-ml-continuous-retraining.ipynb)
-  - Continuous retraining using Pipelines and Time-Series TabularDataset
+## Classification
+- **Classify Credit Card Fraud**
+  - Dataset: [Kaggle's credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
+  - **[Jupyter Notebook (remote run)](classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb)**
+    - run the experiment remotely on AML Compute cluster
+    - test the performance of the best model in the local environment
+  - **[Jupyter Notebook (local run)](local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb)**
+    - run experiment in the local environment
+    - use Mimic Explainer for computing feature importance
+    - deploy the best model along with the explainer to an Azure Kubernetes (AKS) cluster, which will compute the raw and engineered feature importances at inference time
+- **Predict Term Deposit Subscriptions in a Bank**
+  - Dataset: [UCI's bank marketing dataset](https://www.kaggle.com/janiobachmann/bank-marketing-dataset)
+  - **[Jupyter Notebook](classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb)**
+    - run experiment remotely on AML Compute cluster to generate ONNX compatible models
+    - view the featurization steps that were applied during training
+    - view feature importance for the best model
+    - download the best model in ONNX format and use it for inferencing using ONNXRuntime
+    - deploy the best model in PKL format to Azure Container Instance (ACI)
+- **Predict Newsgroup based on Text from News Article**
+  - Dataset: [20 newsgroups text dataset](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html)
+  - **[Jupyter Notebook](classification-text-dnn/auto-ml-classification-text-dnn.ipynb)**
+    - AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data
+    - AutoML will use Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used
+    - Bidirectional Long-Short Term neural network (BiLSTM) will be utilized when a CPU compute is used, thereby optimizing the choice of DNN
+## Regression
+- **Predict Performance of Hardware Parts**
+  - Dataset: Hardware Performance Dataset
+  - **[Jupyter Notebook](regression/auto-ml-regression.ipynb)**
+    - run the experiment remotely on AML Compute cluster
+    - get best trained model for a different metric than the one the experiment was optimized for
+    - test the performance of the best model in the local environment
+  - **[Jupyter Notebook (advanced)](regression/auto-ml-regression.ipynb)**
+    - run the experiment remotely on AML Compute cluster
+    - customize featurization: override column purpose within the dataset, configure transformer parameters
+    - get best trained model for a different metric than the one the experiment was optimized for
+    - run a model explanation experiment on the remote cluster
+    - deploy the model along the explainer and run online inferencing
+## Time Series Forecasting
+- **Forecast Energy Demand**
+  - Dataset: [NYC energy demand data](http://mis.nyiso.com/public/P-58Blist.htm)
+  - **[Jupyter Notebook](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)**
+    - run experiment remotely on AML Compute cluster
+    - use lags and rolling window features
+    - view the featurization steps that were applied during training
+    - get the best model, use it to forecast on test data and compare the accuracy of predictions against real data
+- **Forecast Orange Juice Sales (Multi-Series)**
+  - Dataset: [Dominick's grocery sales of orange juice](forecasting-orange-juice-sales/dominicks_OJ.csv)
+  - **[Jupyter Notebook](forecasting-orange-juice-sales/dominicks_OJ.csv)**
+    - run experiment remotely on AML Compute cluster
+    - customize time-series featurization, change column purpose and override transformer hyper parameters
+    - evaluate locally the performance of the generated best model
+    - deploy the best model as a webservice on Azure Container Instance (ACI)
+    - get online predictions from the deployed model
+- **Forecast Demand of a Bike-Sharing Service**
+  - Dataset: [Bike demand data](forecasting-bike-share/bike-no.csv)
+  - **[Jupyter Notebook](forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)**
+    - run experiment remotely on AML Compute cluster
+    - integrate holiday features
+    - run rolling forecast for test set that is longer than the forecast horizon
+    - compute metrics on the predictions from the remote forecast
+- **The Forecast Function Interface**
+  - Dataset: Generated for sample purposes
+  - **[Jupyter Notebook](forecasting-forecast-function/auto-ml-forecasting-function.ipynb)**
+    - train a forecaster using a remote AML Compute cluster
+    - capabilities of forecast function (e.g. forecast farther into the horizon)
+    - generate confidence intervals
+- **Forecast Beverage Production**
+  - Dataset: [Monthly beer production data](forecasting-beer-remote/Beer_no_valid_split_train.csv)
+  - **[Jupyter Notebook](forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb)**
+    - train using a remote AML Compute cluster
+    - enable the DNN learning model
+    - forecast on a remote compute cluster and compare different model performance
+- **Continuous Retraining with NOAA Weather Data**
+  - Dataset: [NOAA weather data from Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/)
+  - **[Jupyter Notebook](continuous-retraining/auto-ml-continuous-retraining.ipynb)**
+    - continuously retrain a model using Pipelines and AutoML
+    - create a Pipeline to upload a time series dataset to an Azure blob
+    - create a Pipeline to run an AutoML experiment and register the best resulting model in the Workspace
+    - publish the training pipeline created and schedule it to run daily
 <a name="documentation"></a>
 See [Configure automated machine learning experiments](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-auto-train) to learn more about the settings and features available for automated machine learning experiments.
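For the Databricks path described above (non-ML runtime 7.1 and up), the install happens in the notebook's first cell; this sketch simply restates the command that appears later in this compare:

```python
# First cell of an Azure Databricks notebook (non-ML runtime 7.1+): install the AutoML SDK.
# %pip is a Databricks/IPython magic, not plain Python.
%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt
```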

View File

@@ -9,7 +9,7 @@ dependencies:
 - numpy==1.18.5
 - cython
 - urllib3<1.24
-- scipy==1.4.1
+- scipy>=1.4.1,<=1.5.2
 - scikit-learn==0.22.1
 - pandas==0.25.1
 - py-xgboost<=0.90
@@ -24,5 +24,5 @@ dependencies:
 - pytorch-transformers==1.0.0
 - spacy==2.1.8
 - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.16.0/validated_win32_requirements.txt [--no-deps]
+- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_win32_requirements.txt [--no-deps]

View File

@@ -9,7 +9,7 @@ dependencies:
 - numpy==1.18.5
 - cython
 - urllib3<1.24
-- scipy==1.4.1
+- scipy>=1.4.1,<=1.5.2
 - scikit-learn==0.22.1
 - pandas==0.25.1
 - py-xgboost<=0.90
@@ -24,5 +24,5 @@ dependencies:
 - pytorch-transformers==1.0.0
 - spacy==2.1.8
 - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.16.0/validated_linux_requirements.txt [--no-deps]
+- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_linux_requirements.txt [--no-deps]

View File

@@ -10,7 +10,7 @@ dependencies:
 - numpy==1.18.5
 - cython
 - urllib3<1.24
-- scipy==1.4.1
+- scipy>=1.4.1,<=1.5.2
 - scikit-learn==0.22.1
 - pandas==0.25.1
 - py-xgboost<=0.90
@@ -25,4 +25,4 @@ dependencies:
 - pytorch-transformers==1.0.0
 - spacy==2.1.8
 - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.16.0/validated_darwin_requirements.txt [--no-deps]
+- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_darwin_requirements.txt [--no-deps]
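All three automl_env files relax the scipy pin from ==1.4.1 to a range. A hedged way to verify an already-provisioned environment against the new constraint (assumes the packaging package is available):

```python
# Check the installed scipy against the relaxed pin scipy>=1.4.1,<=1.5.2.
import scipy
from packaging.version import Version

v = Version(scipy.__version__)
assert Version("1.4.1") <= v <= Version("1.5.2"), f"scipy {v} is outside the validated range"
```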

View File

@@ -105,7 +105,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -93,7 +93,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -0,0 +1,592 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Text Classification Using Deep Learning**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Evaluate](#Evaluate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"This notebook demonstrates classification with text data using deep learning in AutoML.\n",
"\n",
"AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data. Depending on the compute cluster the user provides, AutoML tried out Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used, and Bidirectional Long-Short Term neural network (BiLSTM) when a CPU compute is used, thereby optimizing the choice of DNN for the uesr's setup.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).\n",
"\n",
"Notebook synopsis:\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Configuration and remote run of AutoML for a text dataset (20 Newsgroups dataset from scikit-learn) for classification\n",
"3. Registering the best model for future use\n",
"4. Evaluating the final model on a test set"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import shutil\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.run import Run\n",
"from azureml.widgets import RunDetails\n",
"from azureml.core.model import Model \n",
"from helper import run_inference, get_result_df\n",
"from azureml.train.automl import AutoMLConfig\n",
"from sklearn.datasets import fetch_20newsgroups"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose an experiment name.\n",
"experiment_name = 'automl-classification-text-dnn'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up a compute cluster\n",
"This section uses a user-provided compute cluster (named \"dnntext-cluster\" in this example). If a cluster with this name does not exist in the user's workspace, the below code will create a new cluster. You can choose the parameters of the cluster as mentioned in the comments.\n",
"\n",
"Whether you provide/select a CPU or GPU cluster, AutoML will choose the appropriate DNN for that setup - BiLSTM or BERT text featurizer will be included in the candidate featurizers on CPU and GPU respectively. If your goal is to obtain the most accurate model, we recommend you use GPU clusters since BERT featurizers usually outperform BiLSTM featurizers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"num_nodes = 2\n",
"\n",
"# Choose a name for your cluster.\n",
"amlcompute_cluster_name = \"dnntext-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_NC6\", # CPU for BiLSTM, such as \"STANDARD_D2_V2\" \n",
" # To use BERT (this is recommended for best performance), select a GPU such as \"STANDARD_NC6\" \n",
" # or similar GPU option\n",
" # available in your workspace\n",
" max_nodes = num_nodes)\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get data\n",
"For this notebook we will use 20 Newsgroups data from scikit-learn. We filter the data to contain four classes and take a sample as training data. Please note that for accuracy improvement, more data is needed. For this notebook we provide a small-data example so that you can use this template to use with your larger sized data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_dir = \"text-dnn-data\" # Local directory to store data\n",
"blobstore_datadir = data_dir # Blob store directory to store data in\n",
"target_column_name = 'y'\n",
"feature_column_name = 'X'\n",
"\n",
"def get_20newsgroups_data():\n",
" '''Fetches 20 Newsgroups data from scikit-learn\n",
" Returns them in form of pandas dataframes\n",
" '''\n",
" remove = ('headers', 'footers', 'quotes')\n",
" categories = [\n",
" 'rec.sport.baseball',\n",
" 'rec.sport.hockey',\n",
" 'comp.graphics',\n",
" 'sci.space',\n",
" ]\n",
"\n",
" data = fetch_20newsgroups(subset = 'train', categories = categories,\n",
" shuffle = True, random_state = 42,\n",
" remove = remove)\n",
" data = pd.DataFrame({feature_column_name: data.data, target_column_name: data.target})\n",
"\n",
" data_train = data[:200]\n",
" data_test = data[200:300] \n",
"\n",
" data_train = remove_blanks_20news(data_train, feature_column_name, target_column_name)\n",
" data_test = remove_blanks_20news(data_test, feature_column_name, target_column_name)\n",
" \n",
" return data_train, data_test\n",
" \n",
"def remove_blanks_20news(data, feature_column_name, target_column_name):\n",
" \n",
" data[feature_column_name] = data[feature_column_name].replace(r'\\n', ' ', regex=True).apply(lambda x: x.strip())\n",
" data = data[data[feature_column_name] != '']\n",
" \n",
" return data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Fetch data and upload to datastore for use in training"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_train, data_test = get_20newsgroups_data()\n",
"\n",
"if not os.path.isdir(data_dir):\n",
" os.mkdir(data_dir)\n",
" \n",
"train_data_fname = data_dir + '/train_data.csv'\n",
"test_data_fname = data_dir + '/test_data.csv'\n",
"\n",
"data_train.to_csv(train_data_fname, index=False)\n",
"data_test.to_csv(test_data_fname, index=False)\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"datastore.upload(src_dir=data_dir, target_path=blobstore_datadir,\n",
" overwrite=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/train_data.csv')])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare AutoML run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).\n",
"\n",
"This notebook uses the blocked_models parameter to exclude some models that can take a longer time to train on some text datasets. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"experiment_timeout_minutes\": 20,\n",
" \"primary_metric\": 'accuracy',\n",
" \"max_concurrent_iterations\": num_nodes, \n",
" \"max_cores_per_iteration\": -1,\n",
" \"enable_dnn\": True,\n",
" \"enable_early_stopping\": True,\n",
" \"validation_size\": 0.3,\n",
" \"verbosity\": logging.INFO,\n",
" \"enable_voting_ensemble\": False,\n",
" \"enable_stack_ensemble\": False,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" compute_target=compute_target,\n",
" training_data=train_dataset,\n",
" label_column_name=target_column_name,\n",
" blocked_models = ['LightGBM'],\n",
" **automl_settings\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit AutoML Run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Below we select the best model pipeline from our iterations, use it to test on test data on the same compute cluster."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can test the model locally to get a feel of the input/output. When the model contains BERT, this step will require pytorch and pytorch-transformers installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your MachineLearningNotebooks folder here:\n",
"MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl_env.yml"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = automl_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now see what text transformations are used to convert text data to features for this dataset, including deep learning transformations based on BiLSTM or Transformer (BERT is one implementation of a Transformer) models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text_transformations_used = []\n",
"for column_group in fitted_model.named_steps['datatransformer'].get_featurization_summary():\n",
" text_transformations_used.extend(column_group['Transformations'])\n",
"text_transformations_used"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Registering the best model\n",
"We now register the best fitted model from the AutoML Run for use in future deployments. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get results stats, extract the best model from AutoML run, download and register the resultant best model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"summary_df = get_result_df(automl_run)\n",
"best_dnn_run_id = summary_df['run_id'].iloc[0]\n",
"best_dnn_run = Run(experiment, best_dnn_run_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_dir = 'Model' # Local folder where the model will be stored temporarily\n",
"if not os.path.isdir(model_dir):\n",
" os.mkdir(model_dir)\n",
" \n",
"best_dnn_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Register the model in your Azure Machine Learning Workspace. If you previously registered a model, please make sure to delete it so as to replace it with this new model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Register the model\n",
"model_name = 'textDNN-20News'\n",
"model = Model.register(model_path = model_dir + '/model.pkl',\n",
" model_name = model_name,\n",
" tags=None,\n",
" workspace=ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Evaluate on Test Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now use the best fitted model from the AutoML Run to make predictions on the test set. \n",
"\n",
"Test set schema should match that of the training set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/test_data.csv')])\n",
"\n",
"# preview the first 3 rows of the dataset\n",
"test_dataset.take(3).to_pandas_dataframe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_experiment = Experiment(ws, experiment_name + \"_test\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"script_folder = os.path.join(os.getcwd(), 'inference')\n",
"os.makedirs(script_folder, exist_ok=True)\n",
"shutil.copy('infer.py', script_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_run = run_inference(test_experiment, compute_target, script_folder, best_dnn_run,\n",
" train_dataset, test_dataset, target_column_name, model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Display computed metrics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RunDetails(test_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_run.wait_for_completion()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pd.Series(test_run.get_metrics())"
]
}
],
"metadata": {
"authors": [
{
"name": "anshirga"
}
],
"compute": [
"AML Compute"
],
"datasets": [
"None"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"None"
],
"friendly_name": "DNN Text Featurization",
"index_order": 2,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
},
"tags": [
"None"
],
"task": "Text featurization using DNNs for classification"
},
"nbformat": 4,
"nbformat_minor": 2
}
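The notebook above notes that the returned pipeline can be tested locally once the pytorch/pytorch-transformers dependencies are installed. A hedged smoke test in that spirit, reusing the notebook's fitted_model and feature_column_name (the sample sentence is illustrative):

```python
# Local sanity check: score one raw-text row with the best AutoML pipeline.
import pandas as pd

sample = pd.DataFrame({feature_column_name: ["NASA announced a new mission to study the outer planets."]})
print(fitted_model.predict(sample))        # predicted class label
print(fitted_model.predict_proba(sample))  # per-class probabilities
```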

View File

@@ -0,0 +1,4 @@
name: auto-ml-classification-text-dnn
dependencies:
- pip:
- azureml-sdk

View File

@@ -0,0 +1,56 @@
import pandas as pd
from azureml.core import Environment
from azureml.train.estimator import Estimator
from azureml.core.run import Run


def run_inference(test_experiment, compute_target, script_folder, train_run,
                  train_dataset, test_dataset, target_column_name, model_name):
    # Reuse the environment of the training run so inference sees the same packages.
    inference_env = train_run.get_environment()

    est = Estimator(source_directory=script_folder,
                    entry_script='infer.py',
                    script_params={
                        '--target_column_name': target_column_name,
                        '--model_name': model_name
                    },
                    inputs=[
                        train_dataset.as_named_input('train_data'),
                        test_dataset.as_named_input('test_data')
                    ],
                    compute_target=compute_target,
                    environment_definition=inference_env)

    run = test_experiment.submit(
        est, tags={
            'training_run_id': train_run.id,
            'run_algorithm': train_run.properties['run_algorithm'],
            'valid_score': train_run.properties['score'],
            'primary_metric': train_run.properties['primary_metric']
        })

    run.log("run_algorithm", run.tags['run_algorithm'])
    return run


def get_result_df(remote_run):
    # Collect all child runs and keep the best score per algorithm.
    children = list(remote_run.get_children(recursive=True))
    summary_df = pd.DataFrame(index=['run_id', 'run_algorithm',
                                     'primary_metric', 'Score'])
    goal_minimize = False
    for run in children:
        if 'run_algorithm' in run.properties and 'score' in run.properties:
            summary_df[run.id] = [run.id, run.properties['run_algorithm'],
                                  run.properties['primary_metric'],
                                  float(run.properties['score'])]
            if 'goal' in run.properties:
                goal_minimize = run.properties['goal'].split('_')[-1] == 'min'

    summary_df = summary_df.T.sort_values(
        'Score',
        ascending=goal_minimize).drop_duplicates(['run_algorithm'])
    summary_df = summary_df.set_index('run_algorithm')
    return summary_df
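For orientation, a sketch of how the notebook wires these helpers together; automl_run, experiment, test_experiment, compute_target, script_folder, and the dataset and model names all come from the notebook cells shown earlier:

```python
from azureml.core.run import Run

summary_df = get_result_df(automl_run)                 # per-algorithm leaderboard, best score first
best_dnn_run = Run(experiment, summary_df['run_id'].iloc[0])
test_run = run_inference(test_experiment, compute_target, script_folder,
                         best_dnn_run, train_dataset, test_dataset,
                         target_column_name, model_name)
```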

View File

@@ -0,0 +1,60 @@
import argparse

import numpy as np
from sklearn.externals import joblib

from azureml.automl.runtime.shared.score import scoring, constants
from azureml.core import Run
from azureml.core.model import Model

parser = argparse.ArgumentParser()
parser.add_argument(
    '--target_column_name', type=str, dest='target_column_name',
    help='Target Column Name')
parser.add_argument(
    '--model_name', type=str, dest='model_name',
    help='Name of registered model')

args = parser.parse_args()
target_column_name = args.target_column_name
model_name = args.model_name

print('args passed are: ')
print('Target column name: ', target_column_name)
print('Name of registered model: ', model_name)

model_path = Model.get_model_path(model_name)
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)

run = Run.get_context()
# get input datasets by name
test_dataset = run.input_datasets['test_data']
train_dataset = run.input_datasets['train_data']

X_test_df = test_dataset.drop_columns(columns=[target_column_name]) \
    .to_pandas_dataframe()
y_test_df = test_dataset.with_timestamp_columns(None) \
    .keep_columns(columns=[target_column_name]) \
    .to_pandas_dataframe()
y_train_df = train_dataset.with_timestamp_columns(None) \
    .keep_columns(columns=[target_column_name]) \
    .to_pandas_dataframe()

predicted = model.predict_proba(X_test_df)

# Use the AutoML scoring module
class_labels = np.unique(np.concatenate((y_train_df.values, y_test_df.values)))
train_labels = model.classes_
classification_metrics = list(constants.CLASSIFICATION_SCALAR_SET)
scores = scoring.score_classification(y_test_df.values, predicted,
                                      classification_metrics,
                                      class_labels, train_labels)

print("scores:")
print(scores)

for key, value in scores.items():
    run.log(key, value)
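One caveat worth flagging: sklearn.externals.joblib was deprecated in scikit-learn 0.21 and removed in 0.23, so this script assumes the scikit-learn==0.22.1 pin from the environment files above. On newer environments the equivalent would be the standalone package:

```python
# Drop-in replacement for sklearn.externals.joblib on scikit-learn >= 0.23,
# assuming the standalone joblib package is installed.
import joblib

model = joblib.load(model_path)
```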

View File

@@ -88,7 +88,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -92,7 +92,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -114,7 +114,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -1,4 +1,5 @@
 import argparse
+import os
 import numpy as np
 import pandas as pd
@@ -10,6 +11,13 @@ from sklearn.metrics import mean_absolute_error, mean_squared_error
 from azureml.automl.runtime.shared.score import scoring, constants
 from azureml.core import Run
+
+try:
+    import torch
+    _torch_present = True
+except ImportError:
+    _torch_present = False
+
 def align_outputs(y_predicted, X_trans, X_test, y_test,
                   predicted_column_name='predicted',
@@ -48,7 +56,7 @@ def align_outputs(y_predicted, X_trans, X_test, y_test,
     # or at edges of time due to lags/rolling windows
     clean = together[together[[target_column_name,
                                predicted_column_name]].notnull().all(axis=1)]
-    return(clean)
+    return (clean)
 def do_rolling_forecast_with_lookback(fitted_model, X_test, y_test,
@@ -83,8 +91,7 @@ def do_rolling_forecast_with_lookback(fitted_model, X_test, y_test,
         if origin_time != X[time_column_name].min():
             # Set the context by including actuals up-to the origin time
             test_context_expand_wind = (X[time_column_name] < origin_time)
-            context_expand_wind = (
-                X_test_expand[time_column_name] < origin_time)
+            context_expand_wind = (X_test_expand[time_column_name] < origin_time)
             y_query_expand[context_expand_wind] = y[test_context_expand_wind]
         # Print some debug info
@@ -115,8 +122,7 @@ def do_rolling_forecast_with_lookback(fitted_model, X_test, y_test,
         # Align forecast with test set for dates within
         # the current rolling window
         trans_tindex = X_trans.index.get_level_values(time_column_name)
-        trans_roll_wind = (trans_tindex >= origin_time) & (
-            trans_tindex < horizon_time)
+        trans_roll_wind = (trans_tindex >= origin_time) & (trans_tindex < horizon_time)
         test_roll_wind = expand_wind & (X[time_column_name] >= origin_time)
         df_list.append(align_outputs(
             y_fcst[trans_roll_wind], X_trans[trans_roll_wind],
@@ -155,8 +161,7 @@ def do_rolling_forecast(fitted_model, X_test, y_test, max_horizon, freq='D'):
         if origin_time != X_test[time_column_name].min():
             # Set the context by including actuals up-to the origin time
             test_context_expand_wind = (X_test[time_column_name] < origin_time)
-            context_expand_wind = (
-                X_test_expand[time_column_name] < origin_time)
+            context_expand_wind = (X_test_expand[time_column_name] < origin_time)
             y_query_expand[context_expand_wind] = y_test[
                 test_context_expand_wind]
@@ -186,10 +191,8 @@ def do_rolling_forecast(fitted_model, X_test, y_test, max_horizon, freq='D'):
         # Align forecast with test set for dates within the
         # current rolling window
         trans_tindex = X_trans.index.get_level_values(time_column_name)
-        trans_roll_wind = (trans_tindex >= origin_time) & (
-            trans_tindex < horizon_time)
-        test_roll_wind = expand_wind & (
-            X_test[time_column_name] >= origin_time)
+        trans_roll_wind = (trans_tindex >= origin_time) & (trans_tindex < horizon_time)
+        test_roll_wind = expand_wind & (X_test[time_column_name] >= origin_time)
         df_list.append(align_outputs(y_fcst[trans_roll_wind],
                                      X_trans[trans_roll_wind],
                                      X_test[test_roll_wind],
@@ -221,6 +224,10 @@ def MAPE(actual, pred):
     return np.mean(APE(actual_safe, pred_safe))
+
+def map_location_cuda(storage, loc):
+    return storage.cuda()
+
 parser = argparse.ArgumentParser()
 parser.add_argument(
     '--max_horizon', type=int, dest='max_horizon',
@@ -238,7 +245,6 @@ parser.add_argument(
     '--model_path', type=str, dest='model_path',
     default='model.pkl', help='Filename of model to be loaded')
-
 args = parser.parse_args()
 max_horizon = args.max_horizon
 target_column_name = args.target_column_name
@@ -246,7 +252,6 @@ time_column_name = args.time_column_name
 freq = args.freq
 model_path = args.model_path
-
 print('args passed are: ')
 print(max_horizon)
 print(target_column_name)
@@ -274,8 +279,19 @@ X_lookback_df = lookback_dataset.drop_columns(columns=[target_column_name])
 y_lookback_df = lookback_dataset.with_timestamp_columns(
     None).keep_columns(columns=[target_column_name])
-fitted_model = joblib.load(model_path)
+_, ext = os.path.splitext(model_path)
+if ext == '.pt':
+    # Load the fc-tcn torch model.
+    assert _torch_present
+    if torch.cuda.is_available():
+        map_location = map_location_cuda
+    else:
+        map_location = 'cpu'
+    with open(model_path, 'rb') as fh:
+        fitted_model = torch.load(fh, map_location=map_location)
+else:
+    # Load the sklearn pipeline.
+    fitted_model = joblib.load(model_path)
 if hasattr(fitted_model, 'get_lookback'):
     lookback = fitted_model.get_lookback()
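The rolling-forecast helpers in this diff advance an origin time through the test set, forecasting at most max_horizon periods per step. A stripped-down sketch of that loop, assuming an AutoML-style fitted_model.forecast(X, y_query) that returns predictions plus a transformed frame, and ignoring the lookback/context handling the real script performs:

```python
import numpy as np
import pandas as pd

def rolling_forecast(fitted_model, X_test, time_column_name, max_horizon, freq='D'):
    """Walk an origin through the test period, forecasting max_horizon steps at a time."""
    offset = pd.tseries.frequencies.to_offset(freq)
    origin = X_test[time_column_name].min()
    last = X_test[time_column_name].max()
    pieces = []
    while origin <= last:
        horizon = origin + offset * max_horizon
        window = (X_test[time_column_name] >= origin) & (X_test[time_column_name] < horizon)
        y_query = np.full(int(window.sum()), np.nan)  # NaNs mark the values to forecast
        y_pred, _ = fitted_model.forecast(X_test[window], y_query)
        pieces.append(pd.Series(y_pred, index=X_test.index[window]))
        origin = horizon
    return pd.concat(pieces)
```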

View File

@@ -87,7 +87,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -97,7 +97,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -94,7 +94,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -82,7 +82,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -96,7 +96,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -98,7 +98,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -92,7 +92,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

View File

@@ -17,9 +17,9 @@
"\n", "\n",
"**For Databricks non ML runtime 7.1(scala 2.21, spark 3.0.0) and up, Install AML sdk by running the following command in the first cell of the notebook.**\n", "**For Databricks non ML runtime 7.1(scala 2.21, spark 3.0.0) and up, Install AML sdk by running the following command in the first cell of the notebook.**\n",
"\n", "\n",
"%pip install -r https://aka.ms/automl_linux_requirements.txt\n", "%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt\n",
"\n", "\n",
"**For Databricks non ML runtime 7.0 and lower, Install AML sdk using init script as shown in [readme](readme.md) before running this notebook.**\n" "**For Databricks non ML runtime 7.0 and lower, Install AML sdk using init script as shown in [readme](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-databricks/automl/README.md) before running this notebook.**\n"
] ]
}, },
{ {
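
For context, the new install command is meant to run in the first cell of a Databricks notebook. A minimal sketch of such a cell; the version check is an illustrative assumption, mirroring the SDK-version cells updated elsewhere in this release:

%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt

# In a subsequent cell, confirm the installed SDK version:
import azureml.core
print("Azure ML SDK version:", azureml.core.VERSION)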


@@ -604,4 +604,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 1
 }


@@ -44,9 +44,11 @@
"import azureml.core\n", "import azureml.core\n",
"from azureml.core import Workspace, Experiment, Datastore, Dataset\n", "from azureml.core import Workspace, Experiment, Datastore, Dataset\n",
"from azureml.core.compute import ComputeTarget, AmlCompute\n", "from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.exceptions import ComputeTargetException\n", "from azureml.exceptions import ComputeTargetException\n",
"from azureml.pipeline.steps import HyperDriveStep, HyperDriveStepRun\n", "from azureml.pipeline.steps import HyperDriveStep, HyperDriveStepRun, PythonScriptStep\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n", "from azureml.pipeline.core import Pipeline, PipelineData, TrainingOutput\n",
"from azureml.train.dnn import TensorFlow\n", "from azureml.train.dnn import TensorFlow\n",
"# from azureml.train.hyperdrive import *\n", "# from azureml.train.hyperdrive import *\n",
"from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal\n", "from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal\n",
@@ -232,7 +234,22 @@
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
" compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n", " compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n",
"\n", "\n",
"print(\"Azure Machine Learning Compute attached\")" "print(\"Azure Machine Learning Compute attached\")\n",
"\n",
"cpu_cluster_name = \"cpu-cluster\"\n",
"\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print(\"Found existing cpu-cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new cpu-cluster\")\n",
" \n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_D2_V2\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
" \n",
" cpu_cluster.wait_for_completion(show_output=True)"
] ]
}, },
{ {
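
The get-or-create pattern above now appears twice in this notebook (the training cluster and the new cpu-cluster). Where more clusters are needed, it could be factored into a helper. A minimal sketch, assuming the notebook's existing imports; the helper name get_or_create_compute is illustrative, not part of the sample:

def get_or_create_compute(ws, name, vm_size, max_nodes=4):
    # Reuse an existing cluster with this name if present, otherwise provision one
    try:
        target = ComputeTarget(workspace=ws, name=name)
        print("Found existing", name)
    except ComputeTargetException:
        print("Creating new", name)
        config = AmlCompute.provisioning_configuration(vm_size=vm_size,
                                                       min_nodes=0,
                                                       max_nodes=max_nodes)
        target = ComputeTarget.create(ws, name, config)
        target.wait_for_completion(show_output=True)
    return target

cpu_cluster = get_or_create_compute(ws, "cpu-cluster", "STANDARD_D2_V2")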
@@ -400,8 +417,16 @@
"source": [ "source": [
"metrics_output_name = 'metrics_output'\n", "metrics_output_name = 'metrics_output'\n",
"metrics_data = PipelineData(name='metrics_data',\n", "metrics_data = PipelineData(name='metrics_data',\n",
" datastore=datastore,\n", " datastore=datastore,\n",
" pipeline_output_name=metrics_output_name)\n", " pipeline_output_name=metrics_output_name,\n",
" training_output=TrainingOutput(\"Metrics\"))\n",
"\n",
"model_output_name = 'model_output'\n",
"saved_model = PipelineData(name='saved_model',\n",
" datastore=datastore,\n",
" pipeline_output_name=model_output_name,\n",
" training_output=TrainingOutput(\"Model\",\n",
" model_file=\"outputs/model/saved_model.pb\"))\n",
"\n", "\n",
"hd_step_name='hd_step01'\n", "hd_step_name='hd_step01'\n",
"hd_step = HyperDriveStep(\n", "hd_step = HyperDriveStep(\n",
@@ -409,7 +434,39 @@
" hyperdrive_config=hd_config,\n", " hyperdrive_config=hd_config,\n",
" estimator_entry_script_arguments=['--data-folder', data_folder],\n", " estimator_entry_script_arguments=['--data-folder', data_folder],\n",
" inputs=[data_folder],\n", " inputs=[data_folder],\n",
" metrics_output=metrics_data)" " outputs=[metrics_data, saved_model])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Find and register best model\n",
"When all the jobs finish, we can choose to register the model that has the highest accuracy through an additional PythonScriptStep.\n",
"\n",
"Through this additional register_model_step, we register the chosen files as a model named `tf-dnn-mnist` under the workspace for deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"conda_dep = CondaDependencies()\n",
"conda_dep.add_pip_package(\"azureml-sdk\")\n",
"\n",
"rcfg = RunConfiguration(conda_dependencies=conda_dep)\n",
"\n",
"register_model_step = PythonScriptStep(script_name='register_model.py',\n",
" name=\"register_model_step01\",\n",
" inputs=[saved_model],\n",
" compute_target=cpu_cluster,\n",
" arguments=[\"--saved-model\", saved_model],\n",
" allow_reuse=True,\n",
" runconfig=rcfg)\n",
"\n",
"register_model_step.run_after(hd_step)"
] ]
}, },
{ {
@@ -425,7 +482,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"pipeline = Pipeline(workspace=ws, steps=[hd_step])\n", "pipeline = Pipeline(workspace=ws, steps=[hd_step, register_model_step])\n",
"pipeline_run = exp.submit(pipeline)" "pipeline_run = exp.submit(pipeline)"
] ]
}, },
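
Because the step outputs above are declared with pipeline_output_name, they can be fetched by name once the run finishes. A short sketch, assuming the pipeline_run submitted above; get_pipeline_output returns a port data reference that can be downloaded locally:

pipeline_run.wait_for_completion(show_output=True)

# Download the named outputs declared on the HyperDrive step
metrics_ref = pipeline_run.get_pipeline_output('metrics_output')
metrics_ref.download(local_path='.')

model_ref = pipeline_run.get_pipeline_output('model_output')
model_ref.download(local_path='.')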
@@ -500,58 +557,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Find and register best model\n", "For model deployment, please refer to [Training, hyperparameter tune, and deploy with TensorFlow](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb)."
"When all the jobs finish, we can find out the one that has the highest accuracy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hd_step_run = HyperDriveStepRun(step_run=pipeline_run.find_step_run(hd_step_name)[0])\n",
"best_run = hd_step_run.get_best_run_by_primary_metric()\n",
"best_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's list the model files uploaded during the run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(best_run.get_file_names())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can then register the folder (and all files in it) as a model named `tf-dnn-mnist` under the workspace for deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model = best_run.register_model(model_name='tf-dnn-mnist', model_path='outputs/model')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For model deployment, please refer to [Training, hyperparameter tune, and deploy with TensorFlow](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/deployment/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb)."
] ]
} }
], ],


@@ -0,0 +1,21 @@
import argparse
import os
from shutil import copy2

from azureml.core import Model, Run

# Parse the path to the saved model file passed in by the pipeline step
parser = argparse.ArgumentParser()
parser.add_argument('--saved-model', type=str, dest='saved_model',
                    help='path to saved model file')
args = parser.parse_args()

# Stage the model file in a local folder so the folder can be registered as a whole
model_output_dir = './model/'
os.makedirs(model_output_dir, exist_ok=True)
copy2(args.saved_model, model_output_dir)

# Register the model under the workspace of the current run
ws = Run.get_context().experiment.workspace
model = Model.register(workspace=ws, model_name='tf-dnn-mnist',
                       model_path=model_output_dir)
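
After the pipeline completes, the registration performed by this script can be verified from the workspace. A quick check, assuming a locally configured workspace; fetching a Model by name returns its latest version:

from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()
model = Model(ws, name='tf-dnn-mnist')
print(model.name, model.version)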


@@ -100,7 +100,7 @@
"\n", "\n",
"# Check core SDK version number\n", "# Check core SDK version number\n",
"\n", "\n",
"print(\"This notebook was created using SDK version 1.16.0, you are currently running version\", azureml.core.VERSION)" "print(\"This notebook was created using SDK version 1.17.0, you are currently running version\", azureml.core.VERSION)"
] ]
}, },
{ {


@@ -97,6 +97,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
 ## Other Notebooks
 |Title| Task | Dataset | Training Compute | Deployment Target | ML Framework | Tags |
 |:----|:-----|:-------:|:----------------:|:-----------------:|:------------:|:------------:|
+| [DNN Text Featurization](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb) | Text featurization using DNNs for classification | None | AML Compute | None | None | None |
 | [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) | | | | | | |
 | [fairlearn-azureml-mitigation](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/fairness/fairlearn-azureml-mitigation.ipynb) | | | | | | |
 | [upload-fairness-dashboard](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/fairness/upload-fairness-dashboard.ipynb) | | | | | | |
@@ -140,4 +141,5 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
 | [img-classification-part2-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb) | | | | | | |
 | [img-classification-part3-deploy-encrypted](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/image-classification-mnist-data/img-classification-part3-deploy-encrypted.ipynb) | | | | | | |
 | [tutorial-pipeline-batch-scoring-classification](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/machine-learning-pipelines-advanced/tutorial-pipeline-batch-scoring-classification.ipynb) | | | | | | |
+| [azureml-quickstart](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/quickstart/azureml-quickstart.ipynb) | | | | | | |
 | [regression-automated-ml](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb) | | | | | | |


@@ -102,7 +102,7 @@
"source": [ "source": [
"import azureml.core\n", "import azureml.core\n",
"\n", "\n",
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },


@@ -16,6 +16,7 @@ The following tutorials are intended to provide an introductory overview of Azur
 | Tutorial | Description | Notebook | Task | Framework |
 | --- | --- | --- | --- | --- |
+| Azure Machine Learning in 10 minutes | Learn how to create and attach compute instances to notebooks, run an image classification model, track model metrics, and deploy a model | [quickstart](quickstart/azureml-quickstart.ipynb) | Learn Azure Machine Learning Concepts | PyTorch
 | [Get Started (day1)](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local) | Learn the fundamental concepts of Azure Machine Learning to help onboard your existing code to Azure Machine Learning. This tutorial focuses heavily on submitting machine learning jobs to scalable cloud-based compute clusters. | [get-started-day1](get-started-day1/day1-part1-setup.ipynb) | Learn Azure Machine Learning Concepts | PyTorch
 | [Train your first ML Model](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-train) | Learn the foundational design patterns in Azure Machine Learning and train a scikit-learn model based on a diabetes data set. | [tutorial-quickstart-train-model.ipynb](create-first-ml-experiment/tutorial-1st-experiment-sdk-train.ipynb) | Regression | Scikit-Learn
 | [Train an image classification model](https://docs.microsoft.com/azure/machine-learning/tutorial-train-models-with-aml) | Train a scikit-learn image classification model. | [img-classification-part1-training.ipynb](image-classification-mnist-data/img-classification-part1-training.ipynb) | Image Classification | Scikit-Learn


@@ -246,7 +246,7 @@
"\n", "\n",
"ws = Workspace.from_config()\n", "ws = Workspace.from_config()\n",
"ct = ws.compute_targets['cpu-cluster']\n", "ct = ws.compute_targets['cpu-cluster']\n",
"ct.delete()" "# ct.delete()"
] ]
}, },
{ {


@@ -0,0 +1,482 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/tutorials/quickstart/azureml-quickstart.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: Azure Machine Learning Quickstart\n",
"\n",
"In this tutorial, you learn how to quickly get started with Azure Machine Learning. Using a *compute instance* - a fully managed cloud-based VM that is pre-configured with the latest data science tools - you will train an image classification model using the CIFAR10 dataset.\n",
"\n",
"In this tutorial you will learn how to:\n",
"\n",
"* Create a compute instance and attach to a notebook\n",
"* Train an image classification model and log metrics\n",
"* Deploy the model\n",
"\n",
"## Prerequisites\n",
"\n",
"1. An Azure Machine Learning workspace\n",
"1. Familiar with the Python language and machine learning workflows.\n",
"\n",
"\n",
"## Create compute & attach to notebook\n",
"\n",
"To run this notebook you will need to create an Azure Machine Learning _compute instance_. The benefits of a compute instance over a local machine (e.g. laptop) or cloud VM are as follows:\n",
"\n",
"* It is a pre-configured with all the latest data science libaries (e.g. panads, scikit, TensorFlow, PyTorch) and tools (Jupyter, RStudio). In this tutorial we make extensive use of PyTorch, AzureML SDK, matplotlib and we do not need to install these components on a compute instance.\n",
"* Notebooks are seperate from the compute instance - this means that you can develop your notebook on a small VM size, and then seamlessly scale up (and/or use a GPU-enabled) the machine when needed to train a model.\n",
"* You can easily turn on/off the instance to control costs. \n",
"\n",
"To create compute, click on the + button at the top of the notebook viewer in Azure Machine Learning Studio:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create.PNG\" width=\"500\"/>\n",
"\n",
"This will pop up the __New compute instance__ blade, provide a valid __Compute name__ (valid characters are upper and lower case letters, digits, and the - character). Then click on __Create__. \n",
"\n",
"It will take approximately 3 minutes for the compute to be ready. When the compute is ready you will see a green light next to the compute name at the top of the notebook viewer:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create2.PNG\" width=\"500\"/>\n",
"\n",
"You will also notice that the notebook is attached to the __Python 3.6 - AzureML__ jupyter Kernel. Other kernels can be selected such as R. In addition, if you did have other instances you can switch to them by simply using the dropdown menu next to the Compute label.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Data\n",
"\n",
"For this tutorial, you will use the CIFAR10 dataset. It has the classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The images in CIFAR-10 three-channel color images of 32x32 pixels in size.\n",
"\n",
"The code cell below uses the PyTorch API to download the data to your compute instance, which should be quick (around 15 seconds). The data is divided into training and test sets.\n",
"\n",
"* **NOTE: The data is downloaded to the compute instance (in the `/tmp` directory) and not a durable cloud-based store like Azure Blob Storage or Azure Data Lake. This means if you delete the compute instance the data will be lost. The [getting started with Azure Machine Learning tutorial series](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local) shows how to create an Azure Machine Learning *dataset*, which aids durability, versioning, and collaboration.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600881820920
}
},
"outputs": [],
"source": [
"import torch\n",
"import torch.optim as optim\n",
"import torchvision\n",
"import torchvision.transforms as transforms\n",
"\n",
"transform = transforms.Compose(\n",
" [transforms.ToTensor(),\n",
" transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n",
"\n",
"trainset = torchvision.datasets.CIFAR10(root='/tmp/data', train=True,\n",
" download=True, transform=transform)\n",
"trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,\n",
" shuffle=True, num_workers=2)\n",
"\n",
"testset = torchvision.datasets.CIFAR10(root='/tmp/data', train=False,\n",
" download=True, transform=transform)\n",
"testloader = torch.utils.data.DataLoader(testset, batch_size=4,\n",
" shuffle=False, num_workers=2)\n",
"\n",
"classes = ('plane', 'car', 'bird', 'cat',\n",
" 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Take a look at the data\n",
"In the following cell, you have some python code that displays the first batch of 4 CIFAR10 images:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600882160868
}
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"def imshow(img):\n",
" img = img / 2 + 0.5 # unnormalize\n",
" npimg = img.numpy()\n",
" plt.imshow(np.transpose(npimg, (1, 2, 0)))\n",
" plt.show()\n",
"\n",
"\n",
"# get some random training images\n",
"dataiter = iter(trainloader)\n",
"images, labels = dataiter.next()\n",
"\n",
"# show images\n",
"imshow(torchvision.utils.make_grid(images))\n",
"# print labels\n",
"print(' '.join('%5s' % classes[labels[j]] for j in range(4)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train model and log metrics\n",
"\n",
"In the directory `model` you will see a file called [model.py](./model/model.py) that defines the neural network architecture. The model is trained using the code below.\n",
"\n",
"* **Note: The model training take around 4 minutes to complete. The benefit of a compute instance is that the notebooks are separate from the compute - therefore you can easily switch to a different size/type of instance. For example, you could switch to run this training on a GPU-based compute instance if you had one provisioned. In the code below you can see that we have included `torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")`, which detects whether you are using a CPU or GPU machine.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600882387754
},
"tags": [
"local run"
]
},
"outputs": [],
"source": [
"from model.model import Net\n",
"from azureml.core import Experiment\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"\n",
"device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
"device\n",
"\n",
"exp = Experiment(workspace=ws, name=\"cifar10-experiment\")\n",
"run = exp.start_logging(snapshot_directory=None)\n",
"\n",
"# define convolutional network\n",
"net = Net()\n",
"net.to(device)\n",
"\n",
"# set up pytorch loss / optimizer\n",
"criterion = torch.nn.CrossEntropyLoss()\n",
"optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)\n",
"\n",
"run.log(\"learning rate\", 0.001)\n",
"run.log(\"momentum\", 0.9)\n",
"\n",
"# train the network\n",
"for epoch in range(1):\n",
" running_loss = 0.0\n",
" for i, data in enumerate(trainloader, 0):\n",
" # unpack the data\n",
" inputs, labels = data[0].to(device), data[1].to(device)\n",
"\n",
" # zero the parameter gradients\n",
" optimizer.zero_grad()\n",
"\n",
" # forward + backward + optimize\n",
" outputs = net(inputs)\n",
" loss = criterion(outputs, labels)\n",
" loss.backward()\n",
" optimizer.step()\n",
"\n",
" # print statistics\n",
" running_loss += loss.item()\n",
" if i % 2000 == 1999:\n",
" loss = running_loss / 2000\n",
" run.log(\"loss\", loss)\n",
" print(f'epoch={epoch + 1}, batch={i + 1:5}: loss {loss:.2f}')\n",
" running_loss = 0.0\n",
"\n",
"print('Finished Training')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have executed the cell below you can view the metrics updating in real time in the Azure Machine Learning studio:\n",
"\n",
"1. Select **Experiments** (left-hand menu)\n",
"1. Select **cifar10-experiment**\n",
"1. Select **Run 1**\n",
"1. Select the **Metrics** Tab\n",
"\n",
"The metrics tab will display the following graph:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/metrics-capture.PNG\" alt=\"dataset details\" width=\"500\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Understand the code\n",
"\n",
"The code is based on the [Pytorch 60minute Blitz](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py) where we have also added a few additional lines of code to track the loss metric as the neural network trains.\n",
"\n",
"| Code | Description | \n",
"| ------------- | ---------- |\n",
"| `experiment = Experiment( ... )` | [Experiment](https://docs.microsoft.com/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py&preserve-view=true) provides a simple way to organize multiple runs under a single name. Later you can see how experiments make it easy to compare metrics between dozens of runs. |\n",
"| `run.log()` | This will log the metrics to Azure Machine Learning. |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Version control models with the Model Registry\n",
"\n",
"You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. Azure Machine Learning supports any model that can be loaded through Python 3.\n",
"\n",
"The code below does:\n",
"\n",
"1. Saves the model on the compute instance\n",
"1. Uploads the model file to the run (if you look in the experiment on Azure Machine Learning studio you should see on the **Outputs + logs** tab the model has been saved in the run)\n",
"1. Registers the uploaded model file\n",
"1. Transitions the run to a completed state"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600888071066
},
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core import Model\n",
"\n",
"PATH = 'cifar_net.pth'\n",
"torch.save(net.state_dict(), PATH)\n",
"\n",
"run.upload_file(name=PATH, path_or_stream=PATH)\n",
"model = run.register_model(model_name='cifar10-model', \n",
" model_path=PATH,\n",
" model_framework=Model.Framework.PYTORCH,\n",
" description='cifar10 model')\n",
" \n",
"run.complete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View model in the model registry\n",
"\n",
"You can see the stored model by navigating to **Models** in the left-hand menu bar of Azure Machine Learning Studio. Click on the **cifar10-model** and you can see the details of the model like the experiement run id that created the model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy the model\n",
"\n",
"The next cell deploys the model to an Azure Container Instance so that you can score data in real-time (Azure Machine Learning also provides mechanisms to do batch scoring). A real-time endpoint allows application developers to integrate machine learning into their apps.\n",
"\n",
"* **Note: The deployment takes around 3 minutes to complete.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core import Environment, Model\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AciWebservice\n",
"\n",
"environment = Environment.get(ws, \"AzureML-PyTorch-1.6-CPU\")\n",
"model = Model(ws, \"cifar10-model\")\n",
"\n",
"service_name = 'cifar-service'\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"\n",
"service = Model.deploy(workspace=ws,\n",
" name=service_name,\n",
" models=[model],\n",
" inference_config=inference_config,\n",
" deployment_config=aci_config,\n",
" overwrite=True)\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Understand the code\n",
"\n",
"| Code | Description | \n",
"| ------------- | ---------- |\n",
"| `environment = Environment.get()` | [Environment](https://docs.microsoft.com/python/api/overview/azure/ml/?view=azure-ml-py#environment) specify the Python packages, environment variables, and software settings around your training and scoring scripts. In this case, you are using a *curated environment* that has all the packages to run PyTorch. |\n",
"| `inference_config = InferenceConfig()` | This specifies the inference (scoring) configuration for the deployment such as the script to use when scoring (see below) and on what environment. |\n",
"| `service = Model.deploy()` | Deploy the model. |\n",
"\n",
"The [*scoring script*](score.py) file is has two functions:\n",
"\n",
"1. an `init` function that executes once when the service starts - in this function you normally get the model from the registry and set global variables\n",
"1. a `run(data)` function that executes each time a call is made to the service. In this function, you normally deserialize the json, run a prediction and output the predicted result.\n",
"\n",
"\n",
"## Test the model service\n",
"\n",
"In the next cell, you get some unseen data from the test loader:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataiter = iter(testloader)\n",
"images, labels = dataiter.next()\n",
"\n",
"# print images\n",
"imshow(torchvision.utils.make_grid(images))\n",
"print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, the next cell runs scores the above images using the deployed model service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"input_payload = json.dumps({\n",
" 'data': images.tolist()\n",
"})\n",
"\n",
"output = service.run(input_payload)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up resources\n",
"\n",
"To clean up the resources after this quickstart, firstly delete the Model service using:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next stop the compute instance by following these steps:\n",
"\n",
"1. Go to **Compute** in the left-hand menu of the Azure Machine Learning studio\n",
"1. Select your compute instance\n",
"1. Select **Stop**\n",
"\n",
"\n",
"**Important: The resources you created can be used as prerequisites to other Azure Machine Learning tutorials and how-to articles.** If you don't plan to use the resources you created, delete them, so you don't incur any charges:\n",
"\n",
"1. In the Azure portal, select **Resource groups** on the far left.\n",
"1. From the list, select the resource group you created.\n",
"1. Select **Delete resource group**.\n",
"1. Enter the resource group name. Then select **Delete**.\n",
"\n",
"You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps\n",
"\n",
"In this tutorial, you have seen how to run your machine learning code on a fully managed, pre-configured cloud-based VM called a *compute instance*. Having a compute instance for your development environment removes the burden of installing data science tooling and libraries (for example, Jupyter, PyTorch, TensorFlow, Scikit) and allows you to easily scale up/down the compute power (RAM, cores) since the notebooks are separated from the VM. \n",
"\n",
"It is often the case that once you have your machine learning code working in a development environment that you want to productionize this by running as a **_job_** - ideally on a schedule or trigger (for example, arrival of new data). To this end, we recommend that you follow [**the day 1 getting started with Azure Machine Learning tutorial**](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local). This day 1 tutorial is focussed on running jobs-based machine learning code in the cloud."
]
}
],
"metadata": {
"authors": [
{
"name": "samkemp"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
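
The deployment cell in this notebook references an entry script, score.py, that does not appear in this diff. A minimal sketch of what such a script could look like, assuming the registered cifar10-model state dict and that model/model.py ships alongside the service - both assumptions, not the repository's actual script:

import json
import torch
from azureml.core.model import Model
from model.model import Net  # assumption: model/model.py is packaged with the service

def init():
    # Runs once when the service starts: load the registered model weights
    global net
    model_path = Model.get_model_path('cifar10-model')
    net = Net()
    net.load_state_dict(torch.load(model_path, map_location='cpu'))
    net.eval()

def run(raw_data):
    # Runs per request: deserialize the JSON payload, predict, return class indices
    images = torch.tensor(json.loads(raw_data)['data'])
    with torch.no_grad():
        outputs = net(images)
    return torch.argmax(outputs, dim=1).tolist()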


@@ -0,0 +1,7 @@
name: azureml-quickstart
dependencies:
- pip:
  - azureml-sdk
  - torch
  - torchvision
  - matplotlib


@@ -0,0 +1,22 @@
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    """Small convolutional network for 32x32 RGB images such as CIFAR10."""

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)        # 3 input channels -> 6 feature maps, 5x5 kernel
        self.pool = nn.MaxPool2d(2, 2)         # 2x2 max pooling halves the spatial size
        self.conv2 = nn.Conv2d(6, 16, 5)       # 6 -> 16 feature maps, 5x5 kernel
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # flattened 16x5x5 activations -> 120 units
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)           # 10 output classes

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # 32x32 -> 28x28 (conv) -> 14x14 (pool)
        x = self.pool(F.relu(self.conv2(x)))   # 14x14 -> 10x10 (conv) -> 5x5 (pool)
        x = x.view(-1, 16 * 5 * 5)             # flatten for the fully connected layers
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)                        # raw logits; CrossEntropyLoss applies softmax
        return x
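
As a quick sanity check of the architecture, the network can be run on a dummy CIFAR10-shaped batch; this snippet is illustrative, not part of the sample:

import torch

net = Net()
dummy = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image
logits = net(dummy)
print(logits.shape)  # torch.Size([1, 10]) - one score per class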