Compare commits

...

29 Commits

Author SHA1 Message Date
amlrelsa-ms
a6817063df update samples from Release-80 as a part of SDK release 2020-12-12 00:45:42 +00:00
Harneet Virk
a79f8c254a Merge pull request #1255 from Azure/release_update/Release-79
update samples from Release-79 as a part of  SDK release
2020-12-07 11:11:32 -08:00
amlrelsa-ms
fb4f287458 update samples from Release-79 as a part of SDK release 2020-12-07 19:09:59 +00:00
Harneet Virk
41366a4af0 Merge pull request #1238 from Azure/release_update/Release-78
update samples from Release-78 as a part of  SDK release
2020-11-11 13:00:22 -08:00
amlrelsa-ms
74deb14fac update samples from Release-78 as a part of SDK release 2020-11-11 19:32:32 +00:00
Harneet Virk
4ed1d445ae Merge pull request #1236 from Azure/release_update/Release-77
update samples from Release-77 as a part of  SDK release
2020-11-10 10:52:23 -08:00
amlrelsa-ms
b5c15db0b4 update samples from Release-77 as a part of SDK release 2020-11-10 18:46:23 +00:00
Harneet Virk
91d43bade6 Merge pull request #1235 from Azure/release_update_stablev2/Release-44
update samples from Release-44 as a part of 1.18.0 SDK stable release
2020-11-10 08:52:24 -08:00
amlrelsa-ms
bd750f5817 update samples from Release-44 as a part of 1.18.0 SDK stable release 2020-11-10 03:42:03 +00:00
mx-iao
637bcc5973 Merge pull request #1229 from Azure/lostmygithubaccount-patch-3
Update README.md
2020-11-03 15:18:37 -10:00
Cody
ba741fb18d Update README.md 2020-11-03 17:16:28 -08:00
Harneet Virk
ac0ad8d487 Merge pull request #1228 from Azure/release_update/Release-76
update samples from Release-76 as a part of  SDK release
2020-11-03 16:12:15 -08:00
amlrelsa-ms
5019ad6c5a update samples from Release-76 as a part of SDK release 2020-11-03 22:31:02 +00:00
Cody
41a2ebd2b3 Merge pull request #1226 from Azure/lostmygithubaccount-patch-3
Update README.md
2020-11-03 11:25:10 -08:00
Cody
53e3283d1d Update README.md 2020-11-03 11:17:41 -08:00
Harneet Virk
ba9c4c5465 Merge pull request #1225 from Azure/release_update/Release-75
update samples from Release-75 as a part of  SDK release
2020-11-03 11:11:11 -08:00
amlrelsa-ms
a6c65f00ec update samples from Release-75 as a part of SDK release 2020-11-03 19:07:12 +00:00
Cody
95072eabc2 Merge pull request #1221 from Azure/lostmygithubaccount-patch-2
Update README.md
2020-11-02 11:52:05 -08:00
Cody
12905ef254 Update README.md 2020-11-02 06:59:44 -08:00
Harneet Virk
4cf56eee91 Merge pull request #1217 from Azure/release_update/Release-74
update samples from Release-74 as a part of  SDK release
2020-10-30 17:27:02 -07:00
amlrelsa-ms
d345ff6c37 update samples from Release-74 as a part of SDK release 2020-10-30 22:20:10 +00:00
Harneet Virk
560dcac0a0 Merge pull request #1214 from Azure/release_update/Release-73
update samples from Release-73 as a part of  SDK release
2020-10-29 23:38:02 -07:00
amlrelsa-ms
322087a58c update samples from Release-73 as a part of SDK release 2020-10-30 06:37:05 +00:00
Harneet Virk
e255c000ab Merge pull request #1211 from Azure/release_update/Release-72
update samples from Release-72 as a part of  SDK release
2020-10-28 14:30:50 -07:00
amlrelsa-ms
7871e37ec0 update samples from Release-72 as a part of SDK release 2020-10-28 21:24:40 +00:00
Cody
58e584e7eb Update README.md (#1209) 2020-10-27 21:00:38 -04:00
Harneet Virk
1b0d75cb45 Merge pull request #1206 from Azure/release_update/Release-71
update samples from Release-71 as a part of  SDK 1.17.0 release
2020-10-26 22:29:48 -07:00
amlrelsa-ms
5c38272fb4 update samples from Release-71 as a part of SDK release 2020-10-27 04:11:39 +00:00
Harneet Virk
e026c56f19 Merge pull request #1200 from Azure/cody/add-new-repo-link
update readme
2020-10-22 10:50:03 -07:00
69 changed files with 2252 additions and 787 deletions

View File

@@ -28,7 +28,7 @@ git clone https://github.com/Azure/MachineLearningNotebooks.git
pip install azureml-sdk[notebooks,tensorboard]
# install model explainability component
pip install azureml-sdk[explain]
pip install azureml-sdk[interpret]
# install automated ml components
pip install azureml-sdk[automl]
@@ -86,7 +86,7 @@ If you need additional Azure ML SDK components, you can either modify the Docker
pip install azureml-sdk[automl]
# install the core SDK and model explainability component
pip install azureml-sdk[explain]
pip install azureml-sdk[interpret]
# install the core SDK and experimental components
pip install azureml-sdk[contrib]
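After installing any of the extras above, a quick sanity check (the same version check the sample notebooks run) is:

```python
# Confirm the Azure ML SDK is importable and report its version.
import azureml.core

print("Azure ML SDK version:", azureml.core.VERSION)
```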

View File

@@ -2,7 +2,7 @@
> a community-driven repository of examples using mlflow for tracking can be found at https://github.com/Azure/azureml-examples
This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
![Azure ML Workflow](https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/media/concept-azure-machine-learning-architecture/workflow.png)
@@ -20,10 +20,10 @@ This [index](./index.md) should assist in navigating the Azure Machine Learning
If you want to...
* ...try out and explore Azure ML, start with image classification tutorials: [Part 1 (Training)](./tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb) and [Part 2 (Deployment)](./tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb).
* ...learn about experimentation and tracking run history, try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb).
* ...train deep learning models at scale, learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb)
* ...deploy models as a realtime scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
* ...deploy models as a batch scoring service, [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
* ...learn about experimentation and tracking run history: [track and monitor experiments](./how-to-use-azureml/track-and-monitor-experiments).
* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
* ...deploy models as a realtime scoring service, first learn the basics by [deploying to Azure Container Instance](./how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.ipynb), then learn how to [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
* ...deploy models as a batch scoring service: [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb) and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
* ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb).
## Tutorials
@@ -35,6 +35,7 @@ The [Tutorials](./tutorials) folder contains notebooks for the tutorials describ
The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
- [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets
- [Training with ML and DL frameworks](./how-to-use-azureml/ml-frameworks) - Examples demonstrating how to build and train machine learning models at scale on Azure ML and perform hyperparameter tuning.
- [Manage Azure ML Service](./how-to-use-azureml/manage-azureml-service) - Examples of how to perform tasks such as authenticating against the Azure ML service in different ways.
- [Automated Machine Learning](./how-to-use-azureml/automated-machine-learning) - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models
- [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
@@ -58,7 +59,6 @@ Visit this [community repository](https://github.com/microsoft/MLOps/tree/master
## Projects using Azure Machine Learning
Visit following repos to see projects contributed by Azure ML users:
- [AML Examples](https://github.com/Azure/azureml-examples)
- [Learn about Natural Language Processing best practices using Azure Machine Learning service](https://github.com/microsoft/nlp)
- [Pre-Train BERT models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
- [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)

View File

@@ -103,7 +103,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -38,7 +38,7 @@
"## Introduction\n",
"This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).\n",
"\n",
"We will apply the [grid search algorithm](https://fairlearn.github.io/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n",
"We will apply the [grid search algorithm](https://fairlearn.github.io/master/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n",
"\n",
"### Setup\n",
"\n",
@@ -98,8 +98,11 @@
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import fetch_openml\n",
"data = fetch_openml(data_id=1590, as_frame=True)\n",
"from utilities import fetch_openml_with_retries\n",
"\n",
"data = fetch_openml_with_retries(data_id=1590)\n",
" \n",
"# Extract the items we want\n",
"X_raw = data.data\n",
"Y = (data.target == '>50K') * 1\n",
"\n",

View File

@@ -98,8 +98,11 @@
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import fetch_openml\n",
"data = fetch_openml(data_id=1590, as_frame=True)\n",
"from utilities import fetch_openml_with_retries\n",
"\n",
"data = fetch_openml_with_retries(data_id=1590)\n",
" \n",
"# Extract the items we want\n",
"X_raw = data.data\n",
"Y = (data.target == '>50K') * 1"
]

View File

@@ -0,0 +1,28 @@
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
"""Utilities for azureml-contrib-fairness notebooks."""

import time

from sklearn.datasets import fetch_openml


def fetch_openml_with_retries(data_id, max_retries=4, retry_delay=60):
    """Fetch a given dataset from OpenML with retries as specified."""
    for i in range(max_retries):
        try:
            print("Download attempt {0} of {1}".format(i + 1, max_retries))
            data = fetch_openml(data_id=data_id, as_frame=True)
            break
        except Exception as e:
            print("Download attempt failed with exception:")
            print(e)
            if i + 1 != max_retries:
                print("Will retry after {0} seconds".format(retry_delay))
                time.sleep(retry_delay)
                retry_delay = retry_delay * 2
    else:
        raise RuntimeError("Unable to download dataset from OpenML")

    return data
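For reference, a minimal usage sketch of the helper above, mirroring how the fairness notebooks earlier in this diff call it:

```python
from utilities import fetch_openml_with_retries

data = fetch_openml_with_retries(data_id=1590)     # adult census dataset from OpenML
X_raw = data.data                                  # features
Y = (data.target == '>50K') * 1                    # binary label: income above 50K
```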

View File

@@ -4,6 +4,7 @@ Learn how to use Azure Machine Learning services for experimentation and model m
As a pre-requisite, run the [configuration Notebook](../configuration.ipynb) notebook first to set up your Azure ML Workspace. Then, run the notebooks in following recommended order.
* [train-within-notebook](./training/train-within-notebook): Train a model while tracking run history, and learn how to deploy the model as web service to Azure Container Instance.
* [train-on-local](./training/train-on-local): Learn how to submit a run to local computer and use Azure ML managed run configuration.
* [train-on-amlcompute](./training/train-on-amlcompute): Use a 1-n node Azure ML managed compute cluster for remote runs on Azure CPU or GPU infrastructure.
* [train-on-remote-vm](./training/train-on-remote-vm): Use Data Science Virtual Machine as a target for remote runs.

View File

@@ -97,62 +97,96 @@ jupyter notebook
<a name="databricks"></a>
## Setup using Azure Databricks
**NOTE**: Please create your Azure Databricks cluster as v6.0 (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: Please create your Azure Databricks cluster as v7.1 (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: You should at least have contributor access to your Azure subscription to run the notebook.
- Please remove the previous SDK version if there is any and install the latest SDK by installing **azureml-sdk[automl]** as a PyPi library in Azure Databricks workspace.
- You can find the detail Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks).
- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks) and import into the Azure databricks workspace.
- You can find the detail Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl).
- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl) and import into the Azure databricks workspace.
- Attach the notebook to the cluster.
<a name="samples"></a>
# Automated ML SDK Sample Notebooks
- [auto-ml-classification-credit-card-fraud.ipynb](classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb)
- Dataset: Kaggle's [credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
- Simple example of using automated ML for classification to fraudulent credit card transactions
- Uses azure compute for training
## Classification
- **Classify Credit Card Fraud**
- Dataset: [Kaggle's credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
- **[Jupyter Notebook (remote run)](classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb)**
- run the experiment remotely on AML Compute cluster
- test the performance of the best model in the local environment
- **[Jupyter Notebook (local run)](local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb)**
- run experiment in the local environment
- use Mimic Explainer for computing feature importance
- deploy the best model along with the explainer to an Azure Kubernetes (AKS) cluster, which will compute the raw and engineered feature importances at inference time
- **Predict Term Deposit Subscriptions in a Bank**
- Dataset: [UCI's bank marketing dataset](https://www.kaggle.com/janiobachmann/bank-marketing-dataset)
- **[Jupyter Notebook](classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb)**
- run experiment remotely on AML Compute cluster to generate ONNX compatible models
- view the featurization steps that were applied during training
- view feature importance for the best model
- download the best model in ONNX format and use it for inferencing using ONNXRuntime
- deploy the best model in PKL format to Azure Container Instance (ACI)
- **Predict Newsgroup based on Text from News Article**
- Dataset: [20 newsgroups text dataset](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html)
- **[Jupyter Notebook](classification-text-dnn/auto-ml-classification-text-dnn.ipynb)**
- AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data
- AutoML will use Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used
- Bidirectional Long-Short Term neural network (BiLSTM) will be utilized when a CPU compute is used, thereby optimizing the choice of DNN
- [auto-ml-regression.ipynb](regression/auto-ml-regression.ipynb)
## Regression
- **Predict Performance of Hardware Parts**
- Dataset: Hardware Performance Dataset
- Simple example of using automated ML for regression
- Uses azure compute for training
- **[Jupyter Notebook](regression/auto-ml-regression.ipynb)**
- run the experiment remotely on AML Compute cluster
- get best trained model for a different metric than the one the experiment was optimized for
- test the performance of the best model in the local environment
- **[Jupyter Notebook (advanced)](regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb)**
- run the experiment remotely on AML Compute cluster
- customize featurization: override column purpose within the dataset, configure transformer parameters
- get best trained model for a different metric than the one the experiment was optimized for
- run a model explanation experiment on the remote cluster
- deploy the model along with the explainer and run online inferencing
- [auto-ml-regression-explanation-featurization.ipynb](regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb)
- Dataset: Hardware Performance Dataset
- Shows featurization and explanation
- Uses azure compute for training
- [auto-ml-forecasting-energy-demand.ipynb](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
- Dataset: [NYC energy demand data](forecasting-a/nyc_energy.csv)
- Example of using automated ML for training a forecasting model
- [auto-ml-classification-credit-card-fraud-local.ipynb](local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb)
- Dataset: Kaggle's [credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
- Simple example of using automated ML for classification to fraudulent credit card transactions
- Uses local compute for training
- [auto-ml-classification-bank-marketing-all-features.ipynb](classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb)
- Dataset: UCI's [bank marketing dataset](https://www.kaggle.com/janiobachmann/bank-marketing-dataset)
- Simple example of using automated ML for classification to predict term deposit subscriptions for a bank
- Uses azure compute for training
- [auto-ml-forecasting-orange-juice-sales.ipynb](forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb)
- Dataset: [Dominick's grocery sales of orange juice](forecasting-b/dominicks_OJ.csv)
- Example of training an automated ML forecasting model on multiple time-series
- [auto-ml-forecasting-bike-share.ipynb](forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
- Dataset: forecasting for a bike-sharing
- Example of training an automated ML forecasting model on multiple time-series
- [auto-ml-forecasting-function.ipynb](forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
- Example of training an automated ML forecasting model on multiple time-series
- [auto-ml-forecasting-beer-remote.ipynb](forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb)
- Example of training an automated ML forecasting model on multiple time-series
- Beer Production Forecasting
- [auto-ml-continuous-retraining.ipynb](continuous-retraining/auto-ml-continuous-retraining.ipynb)
- Continuous retraining using Pipelines and Time-Series TabularDataset
## Time Series Forecasting
- **Forecast Energy Demand**
- Dataset: [NYC energy demand data](http://mis.nyiso.com/public/P-58Blist.htm)
- **[Jupyter Notebook](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)**
- run experiment remotely on AML Compute cluster
- use lags and rolling window features
- view the featurization steps that were applied during training
- get the best model, use it to forecast on test data and compare the accuracy of predictions against real data
- **Forecast Orange Juice Sales (Multi-Series)**
- Dataset: [Dominick's grocery sales of orange juice](forecasting-orange-juice-sales/dominicks_OJ.csv)
- **[Jupyter Notebook](forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb)**
- run experiment remotely on AML Compute cluster
- customize time-series featurization, change column purpose and override transformer hyper parameters
- evaluate locally the performance of the generated best model
- deploy the best model as a webservice on Azure Container Instance (ACI)
- get online predictions from the deployed model
- **Forecast Demand of a Bike-Sharing Service**
- Dataset: [Bike demand data](forecasting-bike-share/bike-no.csv)
- **[Jupyter Notebook](forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)**
- run experiment remotely on AML Compute cluster
- integrate holiday features
- run rolling forecast for test set that is longer than the forecast horizon
- compute metrics on the predictions from the remote forecast
- **The Forecast Function Interface**
- Dataset: Generated for sample purposes
- **[Jupyter Notebook](forecasting-forecast-function/auto-ml-forecasting-function.ipynb)**
- train a forecaster using a remote AML Compute cluster
- capabilities of forecast function (e.g. forecast farther into the horizon)
- generate confidence intervals
- **Forecast Beverage Production**
- Dataset: [Monthly beer production data](forecasting-beer-remote/Beer_no_valid_split_train.csv)
- **[Jupyter Notebook](forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb)**
- train using a remote AML Compute cluster
- enable the DNN learning model
- forecast on a remote compute cluster and compare different model performance
- **Continuous Retraining with NOAA Weather Data**
- Dataset: [NOAA weather data from Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/)
- **[Jupyter Notebook](continuous-retraining/auto-ml-continuous-retraining.ipynb)**
- continuously retrain a model using Pipelines and AutoML
- create a Pipeline to upload a time series dataset to an Azure blob
- create a Pipeline to run an AutoML experiment and register the best resulting model in the Workspace
- publish the training pipeline created and schedule it to run daily
<a name="documentation"></a>
See [Configure automated machine learning experiments](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-auto-train) to learn more about the settings and features available for automated machine learning experiments; a minimal configuration sketch follows.
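For orientation, a hedged sketch of the kind of forecasting configuration these notebooks build. The parameter names below (time_column_name, max_horizon, target_lags, target_rolling_window_size) are illustrative of this SDK era, and the column and label names are assumptions; consult the linked documentation for the exact settings in your SDK version:

```python
from azureml.train.automl import AutoMLConfig

# Illustrative forecasting settings only; names and values are assumptions.
automl_settings = {
    "time_column_name": "timeStamp",          # assumed time column
    "max_horizon": 48,                        # forecast horizon (periods)
    "target_lags": 12,                        # lag features on the target
    "target_rolling_window_size": 4,          # rolling-window features
}

automl_config = AutoMLConfig(task="forecasting",
                             primary_metric="normalized_root_mean_squared_error",
                             training_data=train_dataset,    # a TabularDataset
                             label_column_name="demand",     # assumed label column
                             compute_target=compute_target,
                             **automl_settings)
```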

View File

@@ -3,13 +3,14 @@ dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- pip<=19.3.1
- python>=3.5.2,<3.6.8
- python>=3.5.2,<3.8
- nb_conda
- boto3==1.15.18
- matplotlib==2.1.0
- numpy==1.18.5
- cython
- urllib3<1.24
- scipy==1.4.1
- scipy>=1.4.1,<=1.5.2
- scikit-learn==0.22.1
- pandas==0.25.1
- py-xgboost<=0.90
@@ -20,9 +21,8 @@ dependencies:
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets
- azureml-widgets~=1.19.0
- pytorch-transformers==1.0.0
- spacy==2.1.8
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.16.0/validated_win32_requirements.txt [--no-deps]
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.19.0/validated_win32_requirements.txt [--no-deps]

View File

@@ -3,13 +3,14 @@ dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- pip<=19.3.1
- python>=3.5.2,<3.6.8
- python>=3.5.2,<3.8
- nb_conda
- boto3==1.15.18
- matplotlib==2.1.0
- numpy==1.18.5
- cython
- urllib3<1.24
- scipy==1.4.1
- scipy>=1.4.1,<=1.5.2
- scikit-learn==0.22.1
- pandas==0.25.1
- py-xgboost<=0.90
@@ -20,9 +21,9 @@ dependencies:
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets
- azureml-widgets~=1.19.0
- pytorch-transformers==1.0.0
- spacy==2.1.8
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.16.0/validated_linux_requirements.txt [--no-deps]
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.19.0/validated_linux_requirements.txt [--no-deps]

View File

@@ -4,13 +4,14 @@ dependencies:
# Currently Azure ML only supports 3.5.2 and later.
- pip<=19.3.1
- nomkl
- python>=3.5.2,<3.6.8
- python>=3.5.2,<3.8
- nb_conda
- boto3==1.15.18
- matplotlib==2.1.0
- numpy==1.18.5
- cython
- urllib3<1.24
- scipy==1.4.1
- scipy>=1.4.1,<=1.5.2
- scikit-learn==0.22.1
- pandas==0.25.1
- py-xgboost<=0.90
@@ -21,8 +22,8 @@ dependencies:
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets
- azureml-widgets~=1.19.0
- pytorch-transformers==1.0.0
- spacy==2.1.8
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.16.0/validated_darwin_requirements.txt [--no-deps]
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.19.0/validated_darwin_requirements.txt [--no-deps]

View File

@@ -105,7 +105,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -899,7 +899,7 @@
"metadata": {
"authors": [
{
"name": "anumamah"
"name": "ratanase"
}
],
"category": "tutorial",

View File

@@ -93,7 +93,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -450,7 +450,7 @@
"metadata": {
"authors": [
{
"name": "tzvikei"
"name": "ratanase"
}
],
"category": "tutorial",

View File

@@ -0,0 +1,589 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Text Classification Using Deep Learning**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Evaluate](#Evaluate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"This notebook demonstrates classification with text data using deep learning in AutoML.\n",
"\n",
"AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data. Depending on the compute cluster the user provides, AutoML tried out Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used, and Bidirectional Long-Short Term neural network (BiLSTM) when a CPU compute is used, thereby optimizing the choice of DNN for the uesr's setup.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"Notebook synopsis:\n",
"\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Configuration and remote run of AutoML for a text dataset (20 Newsgroups dataset from scikit-learn) for classification\n",
"3. Registering the best model for future use\n",
"4. Evaluating the final model on a test set"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import shutil\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.run import Run\n",
"from azureml.widgets import RunDetails\n",
"from azureml.core.model import Model \n",
"from helper import run_inference, get_result_df\n",
"from azureml.train.automl import AutoMLConfig\n",
"from sklearn.datasets import fetch_20newsgroups"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose an experiment name.\n",
"experiment_name = 'automl-classification-text-dnn'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up a compute cluster\n",
"This section uses a user-provided compute cluster (named \"dnntext-cluster\" in this example). If a cluster with this name does not exist in the user's workspace, the below code will create a new cluster. You can choose the parameters of the cluster as mentioned in the comments.\n",
"\n",
"Whether you provide/select a CPU or GPU cluster, AutoML will choose the appropriate DNN for that setup - BiLSTM or BERT text featurizer will be included in the candidate featurizers on CPU and GPU respectively. If your goal is to obtain the most accurate model, we recommend you use GPU clusters since BERT featurizers usually outperform BiLSTM featurizers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"num_nodes = 2\n",
"\n",
"# Choose a name for your cluster.\n",
"amlcompute_cluster_name = \"dnntext-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_NC6\", # CPU for BiLSTM, such as \"STANDARD_D2_V2\" \n",
" # To use BERT (this is recommended for best performance), select a GPU such as \"STANDARD_NC6\" \n",
" # or similar GPU option\n",
" # available in your workspace\n",
" max_nodes = num_nodes)\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get data\n",
"For this notebook we will use 20 Newsgroups data from scikit-learn. We filter the data to contain four classes and take a sample as training data. Please note that for accuracy improvement, more data is needed. For this notebook we provide a small-data example so that you can use this template to use with your larger sized data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_dir = \"text-dnn-data\" # Local directory to store data\n",
"blobstore_datadir = data_dir # Blob store directory to store data in\n",
"target_column_name = 'y'\n",
"feature_column_name = 'X'\n",
"\n",
"def get_20newsgroups_data():\n",
" '''Fetches 20 Newsgroups data from scikit-learn\n",
" Returns them in form of pandas dataframes\n",
" '''\n",
" remove = ('headers', 'footers', 'quotes')\n",
" categories = [\n",
" 'rec.sport.baseball',\n",
" 'rec.sport.hockey',\n",
" 'comp.graphics',\n",
" 'sci.space',\n",
" ]\n",
"\n",
" data = fetch_20newsgroups(subset = 'train', categories = categories,\n",
" shuffle = True, random_state = 42,\n",
" remove = remove)\n",
" data = pd.DataFrame({feature_column_name: data.data, target_column_name: data.target})\n",
"\n",
" data_train = data[:200]\n",
" data_test = data[200:300] \n",
"\n",
" data_train = remove_blanks_20news(data_train, feature_column_name, target_column_name)\n",
" data_test = remove_blanks_20news(data_test, feature_column_name, target_column_name)\n",
" \n",
" return data_train, data_test\n",
" \n",
"def remove_blanks_20news(data, feature_column_name, target_column_name):\n",
" \n",
" data[feature_column_name] = data[feature_column_name].replace(r'\\n', ' ', regex=True).apply(lambda x: x.strip())\n",
" data = data[data[feature_column_name] != '']\n",
" \n",
" return data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Fetch data and upload to datastore for use in training"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_train, data_test = get_20newsgroups_data()\n",
"\n",
"if not os.path.isdir(data_dir):\n",
" os.mkdir(data_dir)\n",
" \n",
"train_data_fname = data_dir + '/train_data.csv'\n",
"test_data_fname = data_dir + '/test_data.csv'\n",
"\n",
"data_train.to_csv(train_data_fname, index=False)\n",
"data_test.to_csv(test_data_fname, index=False)\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"datastore.upload(src_dir=data_dir, target_path=blobstore_datadir,\n",
" overwrite=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/train_data.csv')])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare AutoML run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook uses the blocked_models parameter to exclude some models that can take a longer time to train on some text datasets. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"experiment_timeout_minutes\": 20,\n",
" \"primary_metric\": 'accuracy',\n",
" \"max_concurrent_iterations\": num_nodes, \n",
" \"max_cores_per_iteration\": -1,\n",
" \"enable_dnn\": True,\n",
" \"enable_early_stopping\": True,\n",
" \"validation_size\": 0.3,\n",
" \"verbosity\": logging.INFO,\n",
" \"enable_voting_ensemble\": False,\n",
" \"enable_stack_ensemble\": False,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" compute_target=compute_target,\n",
" training_data=train_dataset,\n",
" label_column_name=target_column_name,\n",
" blocked_models = ['LightGBM', 'XGBoostClassifier'],\n",
" **automl_settings\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit AutoML Run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Below we select the best model pipeline from our iterations, use it to test on test data on the same compute cluster."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can test the model locally to get a feel of the input/output. When the model contains BERT, this step will require pytorch and pytorch-transformers installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your MachineLearningNotebooks folder here:\n",
"MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl_env.yml"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = automl_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now see what text transformations are used to convert text data to features for this dataset, including deep learning transformations based on BiLSTM or Transformer (BERT is one implementation of a Transformer) models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text_transformations_used = []\n",
"for column_group in fitted_model.named_steps['datatransformer'].get_featurization_summary():\n",
" text_transformations_used.extend(column_group['Transformations'])\n",
"text_transformations_used"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Registering the best model\n",
"We now register the best fitted model from the AutoML Run for use in future deployments. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get results stats, extract the best model from AutoML run, download and register the resultant best model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"summary_df = get_result_df(automl_run)\n",
"best_dnn_run_id = summary_df['run_id'].iloc[0]\n",
"best_dnn_run = Run(experiment, best_dnn_run_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_dir = 'Model' # Local folder where the model will be stored temporarily\n",
"if not os.path.isdir(model_dir):\n",
" os.mkdir(model_dir)\n",
" \n",
"best_dnn_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Register the model in your Azure Machine Learning Workspace. If you previously registered a model, please make sure to delete it so as to replace it with this new model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Register the model\n",
"model_name = 'textDNN-20News'\n",
"model = Model.register(model_path = model_dir + '/model.pkl',\n",
" model_name = model_name,\n",
" tags=None,\n",
" workspace=ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Evaluate on Test Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now use the best fitted model from the AutoML Run to make predictions on the test set. \n",
"\n",
"Test set schema should match that of the training set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/test_data.csv')])\n",
"\n",
"# preview the first 3 rows of the dataset\n",
"test_dataset.take(3).to_pandas_dataframe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_experiment = Experiment(ws, experiment_name + \"_test\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"script_folder = os.path.join(os.getcwd(), 'inference')\n",
"os.makedirs(script_folder, exist_ok=True)\n",
"shutil.copy('infer.py', script_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_run = run_inference(test_experiment, compute_target, script_folder, best_dnn_run,\n",
" train_dataset, test_dataset, target_column_name, model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Display computed metrics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RunDetails(test_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_run.wait_for_completion()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pd.Series(test_run.get_metrics())"
]
}
],
"metadata": {
"authors": [
{
"name": "anshirga"
}
],
"compute": [
"AML Compute"
],
"datasets": [
"None"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"None"
],
"friendly_name": "DNN Text Featurization",
"index_order": 2,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
},
"tags": [
"None"
],
"task": "Text featurization using DNNs for classification"
},
"nbformat": 4,
"nbformat_minor": 2
}
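As referenced in the notebook's "Retrieve the Best Model" section above, a minimal sketch of trying the fitted pipeline locally. The sample text is hypothetical; the feature column name 'X' matches feature_column_name defined earlier, and predict/predict_proba on the fitted AutoML pipeline follow the same pattern the included infer.py uses:

```python
import pandas as pd

# Hypothetical sample document; column name 'X' matches feature_column_name above.
sample = pd.DataFrame({"X": ["The shuttle completed its second-stage burn and reached orbit."]})

best_run, fitted_model = automl_run.get_output()
print(fitted_model.predict(sample))          # predicted class index (20 newsgroups target)
print(fitted_model.predict_proba(sample))    # per-class probabilities
```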

View File

@@ -0,0 +1,4 @@
name: auto-ml-classification-text-dnn
dependencies:
- pip:
- azureml-sdk

View File

@@ -0,0 +1,56 @@
import pandas as pd

from azureml.core import Environment
from azureml.train.estimator import Estimator
from azureml.core.run import Run


def run_inference(test_experiment, compute_target, script_folder, train_run,
                  train_dataset, test_dataset, target_column_name, model_name):

    inference_env = train_run.get_environment()

    est = Estimator(source_directory=script_folder,
                    entry_script='infer.py',
                    script_params={
                        '--target_column_name': target_column_name,
                        '--model_name': model_name
                    },
                    inputs=[
                        train_dataset.as_named_input('train_data'),
                        test_dataset.as_named_input('test_data')
                    ],
                    compute_target=compute_target,
                    environment_definition=inference_env)

    run = test_experiment.submit(
        est, tags={
            'training_run_id': train_run.id,
            'run_algorithm': train_run.properties['run_algorithm'],
            'valid_score': train_run.properties['score'],
            'primary_metric': train_run.properties['primary_metric']
        })

    run.log("run_algorithm", run.tags['run_algorithm'])
    return run


def get_result_df(remote_run):
    children = list(remote_run.get_children(recursive=True))
    summary_df = pd.DataFrame(index=['run_id', 'run_algorithm',
                                     'primary_metric', 'Score'])
    goal_minimize = False
    for run in children:
        if ('run_algorithm' in run.properties and 'score' in run.properties):
            summary_df[run.id] = [run.id, run.properties['run_algorithm'],
                                  run.properties['primary_metric'],
                                  float(run.properties['score'])]
            if ('goal' in run.properties):
                goal_minimize = run.properties['goal'].split('_')[-1] == 'min'

    summary_df = summary_df.T.sort_values(
        'Score',
        ascending=goal_minimize).drop_duplicates(['run_algorithm'])
    summary_df = summary_df.set_index('run_algorithm')
    return summary_df

View File

@@ -0,0 +1,60 @@
import argparse

import numpy as np
from sklearn.externals import joblib

from azureml.automl.runtime.shared.score import scoring, constants
from azureml.core import Run
from azureml.core.model import Model

parser = argparse.ArgumentParser()
parser.add_argument(
    '--target_column_name', type=str, dest='target_column_name',
    help='Target Column Name')
parser.add_argument(
    '--model_name', type=str, dest='model_name',
    help='Name of registered model')

args = parser.parse_args()
target_column_name = args.target_column_name
model_name = args.model_name

print('args passed are: ')
print('Target column name: ', target_column_name)
print('Name of registered model: ', model_name)

model_path = Model.get_model_path(model_name)
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)

run = Run.get_context()
# get input dataset by name
test_dataset = run.input_datasets['test_data']
train_dataset = run.input_datasets['train_data']

X_test_df = test_dataset.drop_columns(columns=[target_column_name]) \
    .to_pandas_dataframe()
y_test_df = test_dataset.with_timestamp_columns(None) \
    .keep_columns(columns=[target_column_name]) \
    .to_pandas_dataframe()
y_train_df = train_dataset.with_timestamp_columns(None) \
    .keep_columns(columns=[target_column_name]) \
    .to_pandas_dataframe()

predicted = model.predict_proba(X_test_df)

# Use the AutoML scoring module
class_labels = np.unique(np.concatenate((y_train_df.values, y_test_df.values)))
train_labels = model.classes_
classification_metrics = list(constants.CLASSIFICATION_SCALAR_SET)
scores = scoring.score_classification(y_test_df.values, predicted,
                                      classification_metrics,
                                      class_labels, train_labels)

print("scores:")
print(scores)

for key, value in scores.items():
    run.log(key, value)

View File

@@ -32,13 +32,6 @@
"8. [Test Retraining](#Test-Retraining)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -88,7 +81,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -550,7 +543,7 @@
"metadata": {
"authors": [
{
"name": "anshirga"
"name": "vivijay"
}
],
"kernelspec": {

View File

@@ -68,6 +68,7 @@
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import json\n",
"import numpy as np\n",
"import pandas as pd\n",
" \n",
@@ -92,7 +93,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -138,7 +139,8 @@
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"reg-cluster\"\n",
"# Try to ensure that the cluster name is unique across the notebooks\n",
"cpu_cluster_name = \"reg-model-proxy\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
@@ -197,6 +199,7 @@
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**training_data**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**label_column_name**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
"|**scenario**|We need to set this parameter to 'Latest' to enable some experimental features. This parameter should not be set outside of this experimental notebook.|\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
@@ -225,6 +228,7 @@
" compute_target = compute_target,\n",
" training_data = train_data,\n",
" label_column_name = label,\n",
" scenario='Latest',\n",
" **automl_settings\n",
" )"
]
@@ -321,6 +325,24 @@
"print(best_run)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Show hyperparameters\n",
"Show the model pipeline used for the best run with its hyperparameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run_properties = json.loads(best_run.get_details()['properties']['pipeline_script'])\n",
"print(json.dumps(run_properties, indent = 1)) "
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -451,7 +473,7 @@
"metadata": {
"authors": [
{
"name": "rakellam"
"name": "sekrupa"
}
],
"categories": [

View File

@@ -54,9 +54,8 @@
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)\n",
"\n",
"Notebook synopsis:\n",
"\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Configuration and remote run of AutoML for a time-series model exploring Regression learners, Arima, Prophet and DNNs\n",
"4. Evaluating the fitted model using a rolling test "
@@ -114,7 +113,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -350,9 +349,7 @@
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**training_data**|Input dataset, containing both features and label column.|\n",
"|**label_column_name**|The name of the label column.|\n",
"|**enable_dnn**|Enable Forecasting DNNs|\n",
"\n",
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)."
"|**enable_dnn**|Enable Forecasting DNNs|\n"
]
},
{
@@ -650,7 +647,7 @@
"metadata": {
"authors": [
{
"name": "omkarm"
"name": "jialiu"
}
],
"hide_code_all_hidden": false,

View File

@@ -3,11 +3,11 @@ from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.train.estimator import Estimator
from azureml.core.run import Run
from azureml.automl.core.shared import constants
def split_fraction_by_grain(df, fraction, time_column_name,
grain_column_names=None):
if not grain_column_names:
df['tmp_grain_column'] = 'grain'
grain_column_names = ['tmp_grain_column']
@@ -17,10 +17,10 @@ def split_fraction_by_grain(df, fraction, time_column_name,
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-int(len(dfg) *
fraction)] if fraction > 0 else dfg)
fraction)] if fraction > 0 else dfg)
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-int(len(dfg) *
fraction):] if fraction > 0 else dfg[:0])
fraction):] if fraction > 0 else dfg[:0])
if 'tmp_grain_column' in grain_column_names:
for df2 in (df, df_head, df_tail):
@@ -59,11 +59,13 @@ def get_result_df(remote_run):
'primary_metric', 'Score'])
goal_minimize = False
for run in children:
if('run_algorithm' in run.properties and 'score' in run.properties):
if run.get_status().lower() == constants.RunState.COMPLETE_RUN \
and 'run_algorithm' in run.properties and 'score' in run.properties:
# We only count in the completed child runs.
summary_df[run.id] = [run.id, run.properties['run_algorithm'],
run.properties['primary_metric'],
float(run.properties['score'])]
if('goal' in run.properties):
if ('goal' in run.properties):
goal_minimize = run.properties['goal'].split('_')[-1] == 'min'
summary_df = summary_df.T.sort_values(
@@ -118,7 +120,6 @@ def run_multiple_inferences(summary_df, train_experiment, test_experiment,
compute_target, script_folder, test_dataset,
lookback_dataset, max_horizon, target_column_name,
time_column_name, freq):
for run_name, run_summary in summary_df.iterrows():
print(run_name)
print(run_summary)

View File

@@ -1,4 +1,5 @@
import argparse
import os
import numpy as np
import pandas as pd
@@ -10,6 +11,13 @@ from sklearn.metrics import mean_absolute_error, mean_squared_error
from azureml.automl.runtime.shared.score import scoring, constants
from azureml.core import Run
try:
import torch
_torch_present = True
except ImportError:
_torch_present = False
def align_outputs(y_predicted, X_trans, X_test, y_test,
predicted_column_name='predicted',
@@ -48,7 +56,7 @@ def align_outputs(y_predicted, X_trans, X_test, y_test,
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name,
predicted_column_name]].notnull().all(axis=1)]
return(clean)
return (clean)
def do_rolling_forecast_with_lookback(fitted_model, X_test, y_test,
@@ -83,8 +91,7 @@ def do_rolling_forecast_with_lookback(fitted_model, X_test, y_test,
if origin_time != X[time_column_name].min():
# Set the context by including actuals up-to the origin time
test_context_expand_wind = (X[time_column_name] < origin_time)
context_expand_wind = (
X_test_expand[time_column_name] < origin_time)
context_expand_wind = (X_test_expand[time_column_name] < origin_time)
y_query_expand[context_expand_wind] = y[test_context_expand_wind]
# Print some debug info
@@ -115,8 +122,7 @@ def do_rolling_forecast_with_lookback(fitted_model, X_test, y_test,
# Align forecast with test set for dates within
# the current rolling window
trans_tindex = X_trans.index.get_level_values(time_column_name)
trans_roll_wind = (trans_tindex >= origin_time) & (
trans_tindex < horizon_time)
trans_roll_wind = (trans_tindex >= origin_time) & (trans_tindex < horizon_time)
test_roll_wind = expand_wind & (X[time_column_name] >= origin_time)
df_list.append(align_outputs(
y_fcst[trans_roll_wind], X_trans[trans_roll_wind],
@@ -155,8 +161,7 @@ def do_rolling_forecast(fitted_model, X_test, y_test, max_horizon, freq='D'):
if origin_time != X_test[time_column_name].min():
# Set the context by including actuals up-to the origin time
test_context_expand_wind = (X_test[time_column_name] < origin_time)
context_expand_wind = (
X_test_expand[time_column_name] < origin_time)
context_expand_wind = (X_test_expand[time_column_name] < origin_time)
y_query_expand[context_expand_wind] = y_test[
test_context_expand_wind]
@@ -186,10 +191,8 @@ def do_rolling_forecast(fitted_model, X_test, y_test, max_horizon, freq='D'):
# Align forecast with test set for dates within the
# current rolling window
trans_tindex = X_trans.index.get_level_values(time_column_name)
trans_roll_wind = (trans_tindex >= origin_time) & (
trans_tindex < horizon_time)
test_roll_wind = expand_wind & (
X_test[time_column_name] >= origin_time)
trans_roll_wind = (trans_tindex >= origin_time) & (trans_tindex < horizon_time)
test_roll_wind = expand_wind & (X_test[time_column_name] >= origin_time)
df_list.append(align_outputs(y_fcst[trans_roll_wind],
X_trans[trans_roll_wind],
X_test[test_roll_wind],
@@ -221,6 +224,10 @@ def MAPE(actual, pred):
return np.mean(APE(actual_safe, pred_safe))
def map_location_cuda(storage, loc):
return storage.cuda()
parser = argparse.ArgumentParser()
parser.add_argument(
'--max_horizon', type=int, dest='max_horizon',
@@ -238,7 +245,6 @@ parser.add_argument(
'--model_path', type=str, dest='model_path',
default='model.pkl', help='Filename of model to be loaded')
args = parser.parse_args()
max_horizon = args.max_horizon
target_column_name = args.target_column_name
@@ -246,7 +252,6 @@ time_column_name = args.time_column_name
freq = args.freq
model_path = args.model_path
print('args passed are: ')
print(max_horizon)
print(target_column_name)
@@ -274,8 +279,19 @@ X_lookback_df = lookback_dataset.drop_columns(columns=[target_column_name])
y_lookback_df = lookback_dataset.with_timestamp_columns(
None).keep_columns(columns=[target_column_name])
fitted_model = joblib.load(model_path)
_, ext = os.path.splitext(model_path)
if ext == '.pt':
# Load the fc-tcn torch model.
assert _torch_present
if torch.cuda.is_available():
map_location = map_location_cuda
else:
map_location = 'cpu'
with open(model_path, 'rb') as fh:
fitted_model = torch.load(fh, map_location=map_location)
else:
# Load the sklearn pipeline.
fitted_model = joblib.load(model_path)
if hasattr(fitted_model, 'get_lookback'):
lookback = fitted_model.get_lookback()

View File

@@ -87,7 +87,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -594,7 +594,7 @@
"metadata": {
"authors": [
{
"name": "erwright"
"name": "jialiu"
}
],
"category": "tutorial",

View File

@@ -1,22 +1,24 @@
import argparse
import azureml.train.automl
from azureml.core import Run
from azureml.core import Dataset, Run
from sklearn.externals import joblib
parser = argparse.ArgumentParser()
parser.add_argument(
'--target_column_name', type=str, dest='target_column_name',
help='Target Column Name')
parser.add_argument(
'--test_dataset', type=str, dest='test_dataset',
help='Test Dataset')
args = parser.parse_args()
target_column_name = args.target_column_name
test_dataset_id = args.test_dataset
run = Run.get_context()
# get input dataset by name
test_dataset = run.input_datasets['test_data']
ws = run.experiment.workspace
df = test_dataset.to_pandas_dataframe().reset_index(drop=True)
# get the input dataset by id
test_dataset = Dataset.get_by_id(ws, id=test_dataset_id)
X_test_df = test_dataset.drop_columns(columns=[target_column_name]).to_pandas_dataframe().reset_index(drop=True)
y_test_df = test_dataset.with_timestamp_columns(None).keep_columns(columns=[target_column_name]).to_pandas_dataframe()

View File

@@ -1,29 +1,32 @@
from azureml.train.estimator import Estimator
from azureml.core import ScriptRunConfig
def run_rolling_forecast(test_experiment, compute_target, train_run, test_dataset,
target_column_name, inference_folder='./forecast'):
def run_rolling_forecast(test_experiment, compute_target, train_run,
test_dataset, target_column_name,
inference_folder='./forecast'):
train_run.download_file('outputs/model.pkl',
inference_folder + '/model.pkl')
inference_env = train_run.get_environment()
est = Estimator(source_directory=inference_folder,
entry_script='forecasting_script.py',
script_params={
'--target_column_name': target_column_name
},
inputs=[test_dataset.as_named_input('test_data')],
compute_target=compute_target,
environment_definition=inference_env)
config = ScriptRunConfig(source_directory=inference_folder,
script='forecasting_script.py',
arguments=['--target_column_name',
target_column_name,
'--test_dataset',
test_dataset.as_named_input(test_dataset.name)],
compute_target=compute_target,
environment=inference_env)
run = test_experiment.submit(est,
tags={
'training_run_id': train_run.id,
'run_algorithm': train_run.properties['run_algorithm'],
'valid_score': train_run.properties['score'],
'primary_metric': train_run.properties['primary_metric']
})
run = test_experiment.submit(config,
tags={'training_run_id':
train_run.id,
'run_algorithm':
train_run.properties['run_algorithm'],
'valid_score':
train_run.properties['score'],
'primary_metric':
train_run.properties['primary_metric']})
run.log("run_algorithm", run.tags['run_algorithm'])
return run

View File

@@ -97,7 +97,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -703,7 +703,7 @@
"metadata": {
"authors": [
{
"name": "erwright"
"name": "jialiu"
}
],
"categories": [

View File

@@ -24,7 +24,7 @@
"metadata": {},
"source": [
"## Introduction\n",
"This notebook demonstrates the full interface to the `forecast()` function. \n",
"This notebook demonstrates the full interface of the `forecast()` function. \n",
"\n",
"The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. \n",
"\n",
@@ -94,7 +94,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -809,7 +809,7 @@
"metadata": {
"authors": [
{
"name": "erwright"
"name": "jialiu"
}
],
"category": "tutorial",

View File

@@ -82,7 +82,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -325,12 +325,11 @@
"source": [
"## Customization\n",
"\n",
"The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include,\n",
"The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:\n",
"\n",
"1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamp which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration need to be add with the columns and their desired types.\n",
"2. Transformer parameters update: Currently supports parameter change for Imputer only. User can customize imputation methods. The supported imputing methods for target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0 which the correct behavior should impute all the missing value with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.\n",
"3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns contain no useful data.\n",
"\n",
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)"
"3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns contain no useful data."
]
},
{
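
A hedged sketch of how those three scenarios map onto FeaturizationConfig calls; the column names here are illustrative placeholders, not taken from this notebook:

from azureml.automl.core.featurization import FeaturizationConfig

featurization_config = FeaturizationConfig()
# 1. Column purpose update: treat a numeric id column as categorical.
featurization_config.add_column_purpose('store_id', 'Categorical')
# 2. Transformer parameter update: impute missing target values with a constant 0.
featurization_config.add_transformer_params('Imputer', ['demand'],
                                             {"strategy": "constant", "fill_value": 0})
# 3. Drop leaky or uninformative columns from featurization.
featurization_config.drop_columns = ['logQuantity']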
@@ -383,7 +382,7 @@
"The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.\n",
"\n",
"We note here that AutoML can sweep over two types of time-series models:\n",
"* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace#upgrade).\n",
"* Models that are trained for each series such as ARIMA and Facebook's Prophet.\n",
"* Models trained across multiple time-series using a regression approach.\n",
"\n",
"In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. \n",
@@ -764,7 +763,7 @@
"metadata": {
"authors": [
{
"name": "erwright"
"name": "jialiu"
}
],
"category": "tutorial",

View File

@@ -96,7 +96,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -359,7 +359,7 @@
"Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data.\n",
"\n",
"### Run the explanation\n",
"#### Download engineered feature importance from artifact store\n",
"#### Download the engineered feature importance from artifact store\n",
"You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features."
]
},
@@ -375,6 +375,25 @@
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download the raw feature importance from artifact store\n",
"You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"raw_explanations = client.download_model_explanation(raw=True)\n",
"print(raw_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -474,6 +493,29 @@
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Use Mimic Explainer for computing and visualizing raw feature importance\n",
"The explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compute the raw explanations\n",
"raw_explanations = explainer.explain(['local', 'global'], get_raw=True,\n",
" raw_feature_names=automl_explainer_setup_obj.raw_feature_names,\n",
" eval_dataset=automl_explainer_setup_obj.X_test_transform,\n",
" raw_eval_dataset=automl_explainer_setup_obj.X_test_raw)\n",
"print(raw_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -589,10 +631,13 @@
" automl_explainer_setup_obj = automl_setup_model_explanations(automl_model,\n",
" X_test=data, task='classification')\n",
" # Retrieve model explanations for engineered explanations\n",
" engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) \n",
" engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform)\n",
" # Retrieve model explanations for raw explanations\n",
" raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True)\n",
" # You can return any data type as long as it is JSON-serializable\n",
" return {'predictions': predictions.tolist(),\n",
" 'engineered_local_importance_values': engineered_local_importance_values}\n"
" 'engineered_local_importance_values': engineered_local_importance_values,\n",
" 'raw_local_importance_values': raw_local_importance_values}\n"
]
},
{
@@ -725,7 +770,9 @@
"# Print the predicted value\n",
"print('predictions:\\n{}\\n'.format(output['predictions']))\n",
"# Print the engineered feature importances for the predicted value\n",
"print('engineered_local_importance_values:\\n{}\\n'.format(output['engineered_local_importance_values']))"
"print('engineered_local_importance_values:\\n{}\\n'.format(output['engineered_local_importance_values']))\n",
"# Print the raw feature importances for the predicted value\n",
"print('raw_local_importance_values:\\n{}\\n'.format(output['raw_local_importance_values']))\n"
]
},
{
@@ -773,7 +820,7 @@
"metadata": {
"authors": [
{
"name": "anumamah"
"name": "ratanase"
}
],
"category": "tutorial",

View File

@@ -42,8 +42,6 @@
"\n",
"If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
"\n",
"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade) \n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Instantiating AutoMLConfig with FeaturizationConfig for customization\n",
@@ -98,7 +96,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -223,9 +221,8 @@
"source": [
"## Customization\n",
"\n",
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade). \n",
"\n",
"Supported customization includes:\n",
"\n",
"1. Column purpose update: Override feature type for the specified column.\n",
"2. Transformer parameter update: Update parameters for the specified transformer. Currently supports Imputer and HashOneHotEncoder.\n",
"3. Drop columns: Columns to drop from being featurized.\n",
@@ -447,7 +444,6 @@
"metadata": {},
"source": [
"## Explanations\n",
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade). \n",
"This section will walk you through the workflow to compute model explanations for an AutoML model on your remote compute.\n",
"\n",
"### Retrieve any AutoML Model for explanations\n",
@@ -655,7 +651,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Operationailze\n",
"## Operationalize\n",
"In this section we will show how you can operationalize an AutoML model and the explainer which was used to compute the explanations in the previous section.\n",
"\n",
"### Register the AutoML model and the scoring explainer\n",
@@ -905,7 +901,7 @@
"metadata": {
"authors": [
{
"name": "anumamah"
"name": "anshirga"
}
],
"categories": [

View File

@@ -4,7 +4,7 @@ import os
import joblib
from interpret.ext.glassbox import LGBMExplainableModel
from automl.client.core.common.constants import MODEL_PATH
from azureml.automl.core.shared.constants import MODEL_PATH
from azureml.core.experiment import Experiment
from azureml.core.dataset import Dataset
from azureml.core.run import Run

View File

@@ -92,7 +92,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -462,7 +462,7 @@
"metadata": {
"authors": [
{
"name": "rakellam"
"name": "ratanase"
}
],
"categories": [

View File

@@ -17,9 +17,9 @@
"\n",
"**For Databricks non ML runtime 7.1(scala 2.21, spark 3.0.0) and up, Install AML sdk by running the following command in the first cell of the notebook.**\n",
"\n",
"%pip install -r https://aka.ms/automl_linux_requirements.txt\n",
"%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt\n",
"\n",
"**For Databricks non ML runtime 7.0 and lower, Install AML sdk using init script as shown in [readme](readme.md) before running this notebook.**\n"
"**For Databricks non ML runtime 7.0 and lower, Install AML sdk using init script as shown in [readme](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-databricks/automl/README.md) before running this notebook.**\n"
]
},
{

View File

@@ -604,4 +604,4 @@
},
"nbformat": 4,
"nbformat_minor": 1
}
}

View File

@@ -276,21 +276,24 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.exceptions import ComputeTargetException\n",
"from azureml.core.compute import ComputeTarget, AksCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"aks_name = \"my-aks\"\n",
"aks_name = \"my-aks-insights\"\n",
"\n",
"creating_compute = False\n",
"try:\n",
" aks_target = ComputeTarget(ws, aks_name)\n",
" print(\"Using existing AKS cluster {}.\".format(aks_name))\n",
" print(\"Using existing AKS compute target {}.\".format(aks_name))\n",
"except ComputeTargetException:\n",
" print(\"Creating a new AKS cluster {}.\".format(aks_name))\n",
" print(\"Creating a new AKS compute target {}.\".format(aks_name))\n",
"\n",
" # Use the default configuration (can also provide parameters to customize).\n",
" prov_config = AksCompute.provisioning_configuration()\n",
" aks_target = ComputeTarget.create(workspace=ws,\n",
" name=aks_name,\n",
" provisioning_configuration=prov_config)"
" provisioning_configuration=prov_config)\n",
" creating_compute = True"
]
},
{
@@ -300,7 +303,7 @@
"outputs": [],
"source": [
"%%time\n",
"if aks_target.provisioning_state != \"Succeeded\":\n",
"if creating_compute and aks_target.provisioning_state != \"Succeeded\":\n",
" aks_target.wait_for_completion(show_output=True)"
]
},
@@ -380,7 +383,7 @@
" aks_service.wait_for_deployment(show_output=True)\n",
" print(aks_service.state)\n",
"else:\n",
" raise ValueError(\"AKS provisioning failed. Error: \", aks_service.error)"
" raise ValueError(\"AKS cluster provisioning failed. Error: \", aks_target.provisioning_errors)"
]
},
{
@@ -458,7 +461,9 @@
"%%time\n",
"aks_service.delete()\n",
"aci_service.delete()\n",
"model.delete()"
"model.delete()\n",
"if creating_compute:\n",
" aks_target.delete()"
]
}
],

View File

@@ -23,7 +23,7 @@
"# Train and explain models remotely via Azure Machine Learning Compute\n",
"\n",
"\n",
"_**This notebook showcases how to use the Azure Machine Learning Interpretability SDK to train and explain a regression model remotely on an Azure Machine Leanrning Compute Target (AMLCompute).**_\n",
"_**This notebook showcases how to use the Azure Machine Learning Interpretability SDK to train and explain a regression model remotely on an Azure Machine Learning Compute Target (AMLCompute).**_\n",
"\n",
"\n",
"\n",
@@ -35,10 +35,7 @@
" 1. Initialize a Workspace\n",
" 1. Create an Experiment\n",
" 1. Introduction to AmlCompute\n",
" 1. Submit an AmlCompute run in a few different ways\n",
" 1. Option 1: Provision as a run based compute target \n",
" 1. Option 2: Provision as a persistent compute target (Basic)\n",
" 1. Option 3: Provision as a persistent compute target (Advanced)\n",
" 1. Submit an AmlCompute run\n",
"1. Additional operations to perform on AmlCompute\n",
"1. [Download model explanations from Azure Machine Learning Run History](#Download)\n",
"1. [Visualize explanations](#Visualize)\n",
@@ -158,7 +155,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Submit an AmlCompute run in a few different ways\n",
"## Submit an AmlCompute run\n",
"\n",
"First lets check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the supported_vms () function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.\n",
"\n",
@@ -204,7 +201,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Option 1: Provision a compute target (Basic)\n",
"### Provision a compute target\n",
"\n",
"You can provision an AmlCompute resource by simply defining two parameters thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continously re-use the same target, debug it between jobs or simply share the resource with other users of your workspace.\n",
"\n",
@@ -327,183 +324,6 @@
"run.get_metrics()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Option 2: Provision a compute target (Advanced)\n",
"\n",
"You can also specify additional properties or change defaults while provisioning AmlCompute using a more advanced configuration. This is useful when you want a dedicated cluster of 4 nodes (for example you can set the min_nodes and max_nodes to 4), or want the compute to be within an existing VNet in your subscription.\n",
"\n",
"In addition to `vm_size` and `max_nodes`, you can specify:\n",
"* `min_nodes`: Minimum nodes (default 0 nodes) to downscale to while running a job on AmlCompute\n",
"* `vm_priority`: Choose between 'dedicated' (default) and 'lowpriority' VMs when provisioning AmlCompute. Low Priority VMs use Azure's excess capacity and are thus cheaper but risk your run being pre-empted\n",
"* `idle_seconds_before_scaledown`: Idle time (default 120 seconds) to wait after run completion before auto-scaling to min_nodes\n",
"* `vnet_resourcegroup_name`: Resource group of the **existing** VNet within which AmlCompute should be provisioned\n",
"* `vnet_name`: Name of VNet\n",
"* `subnet_name`: Name of SubNet within the VNet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpu-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',\n",
" vm_priority='lowpriority',\n",
" min_nodes=2,\n",
" max_nodes=4,\n",
" idle_seconds_before_scaledown='300',\n",
" vnet_resourcegroup_name='<my-resource-group>',\n",
" vnet_name='<my-vnet-name>',\n",
" subnet_name='<my-subnet-name>')\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"cpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure & Run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# Create a new RunConfig object\n",
"run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to AmlCompute target created in previous step\n",
"run_config.target = cpu_cluster.name\n",
"\n",
"# Enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"azureml_pip_packages = [\n",
" 'azureml-defaults', 'azureml-contrib-interpret', 'azureml-telemetry', 'azureml-interpret'\n",
"]\n",
"\n",
"\n",
"\n",
"# Note: this is to pin the scikit-learn and pandas versions to be same as notebook.\n",
"# In production scenario user would choose their dependencies\n",
"import pkg_resources\n",
"available_packages = pkg_resources.working_set\n",
"sklearn_ver = None\n",
"pandas_ver = None\n",
"for dist in available_packages:\n",
" if dist.key == 'scikit-learn':\n",
" sklearn_ver = dist.version\n",
" elif dist.key == 'pandas':\n",
" pandas_ver = dist.version\n",
"sklearn_dep = 'scikit-learn'\n",
"pandas_dep = 'pandas'\n",
"if sklearn_ver:\n",
" sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)\n",
"if pandas_ver:\n",
" pandas_dep = 'pandas=={}'.format(pandas_ver)\n",
"# Specify CondaDependencies obj\n",
"# The CondaDependencies specifies the conda and pip packages that are installed in the environment\n",
"# the submitted job is run in. Note the remote environment(s) needs to be similar to the local\n",
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
"# cause errors. Please take extra care when specifying your dependencies in a production environment.\n",
"azureml_pip_packages.extend([sklearn_dep, pandas_dep])\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(pip_packages=azureml_pip_packages)\n",
"\n",
"from azureml.core import Run\n",
"from azureml.core import ScriptRunConfig\n",
"\n",
"src = ScriptRunConfig(source_directory=project_folder, \n",
" script='train_explain.py', \n",
" run_config=run_config) \n",
"run = experiment.submit(config=src)\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# Shows output of the run on stdout.\n",
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.get_metrics()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Additional operations to perform on AmlCompute\n",
"\n",
"You can perform more operations on AmlCompute such as updating the node counts or deleting the compute. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get_status () gets the latest status of the AmlCompute target\n",
"cpu_cluster.get_status().serialize()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Update () takes in the min_nodes, max_nodes and idle_seconds_before_scaledown and updates the AmlCompute target\n",
"# cpu_cluster.update(min_nodes=1)\n",
"# cpu_cluster.update(max_nodes=10)\n",
"cpu_cluster.update(idle_seconds_before_scaledown=300)\n",
"# cpu_cluster.update(min_nodes=2, max_nodes=4, idle_seconds_before_scaledown=600)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete () is used to deprovision and delete the AmlCompute target. Useful if you want to re-use the compute name \n",
"# 'cpu-cluster' in this case but use a different VM family for instance.\n",
"\n",
"# cpu_cluster.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -44,9 +44,11 @@
"import azureml.core\n",
"from azureml.core import Workspace, Experiment, Datastore, Dataset\n",
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.exceptions import ComputeTargetException\n",
"from azureml.pipeline.steps import HyperDriveStep, HyperDriveStepRun\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import HyperDriveStep, HyperDriveStepRun, PythonScriptStep\n",
"from azureml.pipeline.core import Pipeline, PipelineData, TrainingOutput\n",
"from azureml.train.dnn import TensorFlow\n",
"# from azureml.train.hyperdrive import *\n",
"from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal\n",
@@ -230,9 +232,24 @@
" max_nodes=4)\n",
"\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
" compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n",
"compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n",
"\n",
"print(\"Azure Machine Learning Compute attached\")"
"print(\"Azure Machine Learning Compute attached\")\n",
"\n",
"cpu_cluster_name = \"cpu-cluster\"\n",
"\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print(\"Found existing cpu-cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new cpu-cluster\")\n",
" \n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_D2_V2\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
" \n",
"cpu_cluster.wait_for_completion(show_output=True)"
]
},
{
@@ -400,8 +417,16 @@
"source": [
"metrics_output_name = 'metrics_output'\n",
"metrics_data = PipelineData(name='metrics_data',\n",
" datastore=datastore,\n",
" pipeline_output_name=metrics_output_name)\n",
" datastore=datastore,\n",
" pipeline_output_name=metrics_output_name,\n",
" training_output=TrainingOutput(\"Metrics\"))\n",
"\n",
"model_output_name = 'model_output'\n",
"saved_model = PipelineData(name='saved_model',\n",
" datastore=datastore,\n",
" pipeline_output_name=model_output_name,\n",
" training_output=TrainingOutput(\"Model\",\n",
" model_file=\"outputs/model/saved_model.pb\"))\n",
"\n",
"hd_step_name='hd_step01'\n",
"hd_step = HyperDriveStep(\n",
@@ -409,7 +434,39 @@
" hyperdrive_config=hd_config,\n",
" estimator_entry_script_arguments=['--data-folder', data_folder],\n",
" inputs=[data_folder],\n",
" metrics_output=metrics_data)"
" outputs=[metrics_data, saved_model])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Find and register best model\n",
"When all the jobs finish, we can choose to register the model that has the highest accuracy through an additional PythonScriptStep.\n",
"\n",
"Through this additional register_model_step, we register the chosen files as a model named `tf-dnn-mnist` under the workspace for deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"conda_dep = CondaDependencies()\n",
"conda_dep.add_pip_package(\"azureml-sdk\")\n",
"\n",
"rcfg = RunConfiguration(conda_dependencies=conda_dep)\n",
"\n",
"register_model_step = PythonScriptStep(script_name='register_model.py',\n",
" name=\"register_model_step01\",\n",
" inputs=[saved_model],\n",
" compute_target=cpu_cluster,\n",
" arguments=[\"--saved-model\", saved_model],\n",
" allow_reuse=True,\n",
" runconfig=rcfg)\n",
"\n",
"register_model_step.run_after(hd_step)"
]
},
{
@@ -425,7 +482,7 @@
"metadata": {},
"outputs": [],
"source": [
"pipeline = Pipeline(workspace=ws, steps=[hd_step])\n",
"pipeline = Pipeline(workspace=ws, steps=[hd_step, register_model_step])\n",
"pipeline_run = exp.submit(pipeline)"
]
},
@@ -500,58 +557,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Find and register best model\n",
"When all the jobs finish, we can find out the one that has the highest accuracy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hd_step_run = HyperDriveStepRun(step_run=pipeline_run.find_step_run(hd_step_name)[0])\n",
"best_run = hd_step_run.get_best_run_by_primary_metric()\n",
"best_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's list the model files uploaded during the run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(best_run.get_file_names())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can then register the folder (and all files in it) as a model named `tf-dnn-mnist` under the workspace for deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model = best_run.register_model(model_name='tf-dnn-mnist', model_path='outputs/model')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For model deployment, please refer to [Training, hyperparameter tune, and deploy with TensorFlow](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/deployment/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb)."
"For model deployment, please refer to [Training, hyperparameter tune, and deploy with TensorFlow](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb)."
]
}
],

View File

@@ -19,8 +19,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to Setup a Schedule for a Published Pipeline\n",
"In this notebook, we will show you how you can run an already published pipeline on a schedule."
"# How to Setup a Schedule for a Published Pipeline or Pipeline Endpoint\n",
"In this notebook, we will show you how you can run an already published pipeline or a pipeline endpoint on a schedule."
]
},
{
@@ -159,6 +159,43 @@
"print(\"Newly published pipeline id: {}\".format(published_pipeline1.id))"
]
},
{
"cell_type": "markdown",
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
},
"source": [
"### Create a Pipeline Endpoint\n",
"Alternatively, you can create a schedule to run a pipeline endpoint instead of a published pipeline. You will need this to create a schedule against a pipeline endpoint in the last section of this notebook. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.pipeline.core import PipelineEndpoint\n",
"\n",
"pipeline_endpoint = PipelineEndpoint.publish(workspace=ws, name=\"ScheduledPipelineEndpoint\",\n",
" pipeline=pipeline1, description=\"Publish pipeline endpoint for schedule test\")\n",
"pipeline_endpoint"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -196,14 +233,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a schedule for the pipeline using a recurrence\n",
"### Create a schedule for the published pipeline using a recurrence\n",
"This schedule will run on a specified recurrence interval."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule\n",
@@ -308,7 +355,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1606157800044
}
},
"outputs": [],
"source": [
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
@@ -410,7 +461,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1606157862620
}
},
"outputs": [],
"source": [
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
@@ -419,14 +474,151 @@
"schedule = Schedule.get(ws, schedule_id)\n",
"print(\"Disabled schedule {}. New status is: {}\".format(schedule.id, schedule.status))"
]
},
{
"cell_type": "markdown",
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
},
"source": [
"### Create a schedule for a pipeline endpoint\n",
"Alternative to creating schedules for a published pipeline, you can also create schedules to run pipeline endpoints.\n",
"Retrieve the pipeline endpoint id to create a schedule. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1606157888851
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"pipeline_endpoint_by_name = PipelineEndpoint.get(workspace=ws, name=\"ScheduledPipelineEndpoint\")\n",
"published_pipeline_endpoint_id = pipeline_endpoint_by_name.id\n",
"\n",
"recurrence = ScheduleRecurrence(frequency=\"Day\", interval=2, hours=[22], minutes=[30]) # Runs every other day at 10:30pm\n",
"\n",
"schedule = Schedule.create_for_pipeline_endpoint(workspace=ws, name=\"My_Endpoint_Schedule\",\n",
" pipeline_endpoint_id=published_pipeline_endpoint_id,\n",
" experiment_name='Schedule_Run',\n",
" recurrence=recurrence, description=\"Schedule_Run\",\n",
" wait_for_provisioning=True)\n",
"\n",
"# You may want to make sure that the schedule is provisioned properly\n",
"# before making any further changes to the schedule\n",
"\n",
"print(\"Created schedule with id: {}\".format(schedule.id))"
]
},
{
"cell_type": "markdown",
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
},
"source": [
"### Get all schedules for a given pipeline endpoint\n",
"Once you have the pipeline endpoint ID, then you can get all schedules for that pipeline endopint."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"schedules_for_pipeline_endpoints = Schedule.\\\n",
" get_schedules_for_pipeline_endpoint_id(ws,\n",
" pipeline_endpoint_id=published_pipeline_endpoint_id)\n",
"print('Got all schedules for pipeline endpoint:', published_pipeline_endpoint_id, 'Count:',\n",
" len(schedules_for_pipeline_endpoints))\n",
"\n",
"print('done')"
]
},
{
"cell_type": "markdown",
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
},
"source": [
"### Disable the schedule created for running the pipeline endpont\n",
"Recall the best practice of disabling schedules when not in use.\n",
"The number of schedule triggers allowed per month per region per subscription is 100,000.\n",
"This is calculated using the project trigger counts for all active schedules."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"fetched_schedule = Schedule.get(ws, schedule_id)\n",
"print(\"Using schedule with id: {}\".format(fetched_schedule.id))\n",
"\n",
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
"# for the call to provision the schedule in the backend.\n",
"fetched_schedule.disable(wait_for_provisioning=True)\n",
"fetched_schedule = Schedule.get(ws, schedule_id)\n",
"print(\"Disabled schedule {}. New status is: {}\".format(fetched_schedule.id, fetched_schedule.status))"
]
}
],
"metadata": {
"authors": [
{
"name": "sanpil"
"name": "shbijlan"
}
],
"categories": [
"how-to-use-azureml",
"machine-learning-pipelines",
"intro-to-pipelines"
],
"category": "tutorial",
"compute": [
"AML Compute"
@@ -441,7 +633,7 @@
"framework": [
"Azure ML"
],
"friendly_name": "How to Setup a Schedule for a Published Pipeline",
"friendly_name": "How to Setup a Schedule for a Published Pipeline or Pipeline Endpoint",
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -459,6 +651,9 @@
"pygments_lexer": "ipython3",
"version": "3.6.7"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
},
"order_index": 10,
"star_tag": [
"featured"
@@ -466,7 +661,7 @@
"tags": [
"None"
],
"task": "Demonstrates the use of Schedules for Published Pipelines"
"task": "Demonstrates the use of Schedules for Published Pipelines and Pipeline endpoints"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -30,7 +30,7 @@
"## Introduction\n",
"In this example we showcase how you can use AzureML Dataset to load data for AutoML via AML Pipeline. \n",
"\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you have executed the [configuration](https://aka.ms/pl-config) before running this notebook.\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you have executed the [configuration](https://aka.ms/pl-config) before running this notebook, please also take a look at the [Automated ML setup-using-a-local-conda-environment](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning#setup-using-a-local-conda-environment) section to setup the environment.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",

View File

@@ -2,7 +2,3 @@ name: aml-pipelines-with-automated-machine-learning-step
dependencies:
- pip:
- azureml-sdk
- azureml-train-automl
- azureml-widgets
- matplotlib
- pandas_ml

View File

@@ -0,0 +1,21 @@
import argparse
import json
import os
import azureml.core
from azureml.core import Workspace, Experiment, Model
from azureml.core import Run
from azureml.train.hyperdrive import HyperDriveRun
from shutil import copy2
parser = argparse.ArgumentParser()
parser.add_argument('--saved-model', type=str, dest='saved_model', help='path to saved model file')
args = parser.parse_args()
model_output_dir = './model/'
os.makedirs(model_output_dir, exist_ok=True)
copy2(args.saved_model, model_output_dir)
ws = Run.get_context().experiment.workspace
model = Model.register(workspace=ws, model_name='tf-dnn-mnist', model_path=model_output_dir)

View File

@@ -284,7 +284,7 @@
"# Specify CondaDependencies obj, add necessary packages\n",
"aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(\n",
" conda_packages=['pandas','scikit-learn'], \n",
" pip_packages=['azureml-sdk[automl,explain]', 'pyarrow'])\n",
" pip_packages=['azureml-sdk[automl]', 'pyarrow'])\n",
"\n",
"print (\"Run configuration created.\")"
]
@@ -460,8 +460,8 @@
" name=\"Merge Taxi Data\",\n",
" script_name=\"merge.py\", \n",
" arguments=[\"--output_merge\", merged_data],\n",
" inputs=[cleansed_green_data.parse_parquet_files(file_extension=None),\n",
" cleansed_yellow_data.parse_parquet_files(file_extension=None)],\n",
" inputs=[cleansed_green_data.parse_parquet_files(),\n",
" cleansed_yellow_data.parse_parquet_files()],\n",
" outputs=[merged_data],\n",
" compute_target=aml_compute,\n",
" runconfig=aml_run_config,\n",
@@ -497,7 +497,7 @@
" name=\"Filter Taxi Data\",\n",
" script_name=\"filter.py\", \n",
" arguments=[\"--output_filter\", filtered_data],\n",
" inputs=[merged_data.parse_parquet_files(file_extension=None)],\n",
" inputs=[merged_data.parse_parquet_files()],\n",
" outputs=[filtered_data],\n",
" compute_target=aml_compute,\n",
" runconfig = aml_run_config,\n",
@@ -533,7 +533,7 @@
" name=\"Normalize Taxi Data\",\n",
" script_name=\"normalize.py\", \n",
" arguments=[\"--output_normalize\", normalized_data],\n",
" inputs=[filtered_data.parse_parquet_files(file_extension=None)],\n",
" inputs=[filtered_data.parse_parquet_files()],\n",
" outputs=[normalized_data],\n",
" compute_target=aml_compute,\n",
" runconfig = aml_run_config,\n",
@@ -574,7 +574,7 @@
" name=\"Transform Taxi Data\",\n",
" script_name=\"transform.py\", \n",
" arguments=[\"--output_transform\", transformed_data],\n",
" inputs=[normalized_data.parse_parquet_files(file_extension=None)],\n",
" inputs=[normalized_data.parse_parquet_files()],\n",
" outputs=[transformed_data],\n",
" compute_target=aml_compute,\n",
" runconfig = aml_run_config,\n",
@@ -614,7 +614,7 @@
" script_name=\"train_test_split.py\", \n",
" arguments=[\"--output_split_train\", output_split_train,\n",
" \"--output_split_test\", output_split_test],\n",
" inputs=[transformed_data.parse_parquet_files(file_extension=None)],\n",
" inputs=[transformed_data.parse_parquet_files()],\n",
" outputs=[output_split_train, output_split_test],\n",
" compute_target=aml_compute,\n",
" runconfig = aml_run_config,\n",
@@ -690,7 +690,7 @@
" \"n_cross_validations\": 5\n",
"}\n",
"\n",
"training_dataset = output_split_train.parse_parquet_files(file_extension=None).keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])\n",
"training_dataset = output_split_train.parse_parquet_files().keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])\n",
"\n",
"automl_config = AutoMLConfig(task = 'regression',\n",
" debug_log = 'automated_ml_errors.log',\n",

View File

@@ -180,7 +180,9 @@
"metadata": {},
"source": [
"### Create a FileDataset\n",
"A [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred."
"A [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred.",
"\n",
"You can use dataset objects as inputs. Register the datasets to the workspace if you want to reuse them later."
]
},
{
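
A hedged sketch of creating and registering such a FileDataset for later reuse; datastore and the 'mnist/' folder are assumed placeholders, and ws is the Workspace:

from azureml.core import Dataset

# Reference files under a datastore path; no data is copied, only a reference is created.
file_dataset = Dataset.File.from_files(path=(datastore, 'mnist/'))
registered_ds = file_dataset.register(workspace=ws, name='my_file_dataset',
                                      create_new_version=True)
# Use the dataset as a named input to a pipeline step or run.
named_input = registered_ds.as_named_input('file_data')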

View File

@@ -160,7 +160,8 @@
"metadata": {},
"source": [
"### Create a TabularDataset\n",
"A [TabularDataSet](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) references single or multiple files which contain data in a tabular structure (ie like CSV files) in your datastores or public urls. TabularDatasets provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred."
"A [TabularDataSet](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) references single or multiple files which contain data in a tabular structure (ie like CSV files) in your datastores or public urls. TabularDatasets provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred.\n",
"You can use dataset objects as inputs. Register the datasets to the workspace if you want to reuse them later."
]
},
{
@@ -175,8 +176,7 @@
"\n",
"path_on_datastore = iris_data.path('iris/')\n",
"input_iris_ds = Dataset.Tabular.from_delimited_files(path=path_on_datastore, validate=False)\n",
"registered_iris_ds = input_iris_ds.register(ws, iris_ds_name, create_new_version=True)\n",
"named_iris_ds = registered_iris_ds.as_named_input(iris_ds_name)"
"named_iris_ds = input_iris_ds.as_named_input(iris_ds_name)"
]
},
{
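
If you do want to reuse the dataset later, a hedged sketch of the register-then-fetch pattern; ws, iris_ds_name and input_iris_ds are assumed from the cells above:

from azureml.core import Dataset

# Register once, then retrieve the same dataset by name in later sessions.
registered_iris_ds = input_iris_ds.register(ws, iris_ds_name, create_new_version=True)
same_ds = Dataset.get_by_name(ws, name=iris_ds_name)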

View File

@@ -121,6 +121,33 @@
" auth=interactive_auth)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Despite having access to the workspace, you may sometimes see the following error when retrieving it:\n",
"\n",
"```\n",
"You are currently logged-in to xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx tenant. You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription, please check if it is in this tenant.\n",
"```\n",
"\n",
"This error sometimes occurs when you are trying to access a subscription to which you were recently added. In this case, you need to force authentication again to avoid using a cached authentication token that has not picked up the new permissions. You can do so by setting `force=true` on the `InteractiveLoginAuthentication()` object's constructor as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"forced_interactive_auth = InteractiveLoginAuthentication(tenant_id=\"my-tenant-id\", force=True)\n",
"\n",
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=forced_interactive_auth)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -408,7 +435,7 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment, Run\n",
"from azureml.core import Experiment\n",
"from azureml.core.script_run_config import ScriptRunConfig\n",
"\n",
"exp = Experiment(workspace = ws, name=\"try-secret\")\n",
@@ -424,13 +451,6 @@
"source": [
"Furthermore, you can set and get multiple secrets using set_secrets and get_secrets methods."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -136,7 +136,7 @@
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" compute_target.wait_for_completion(show_output=True)\n",
"compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
@@ -606,14 +606,32 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** `print(service.get_logs())`"
"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(service.get_logs())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the scoring web service endpoint: `print(service.scoring_uri)`"
"This is the scoring web service endpoint:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(service.scoring_uri)"
]
},
{
@@ -742,7 +760,7 @@
"metadata": {
"authors": [
{
"name": "swatig"
"name": "nagaur"
}
],
"category": "training",

View File

@@ -308,9 +308,9 @@
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it uses the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"# can poll for a minimum number of nodes and for a specific timeout. \n",
"# if no min node count is provided it uses the scale settings for the cluster\n",
"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
@@ -429,7 +429,8 @@
"dependencies:\n",
"- python=3.6.2\n",
"- pip:\n",
" - azureml-defaults==1.13.0\n",
" - h5py<=2.10.0\n",
" - azureml-defaults\n",
" - tensorflow-gpu==2.0.0\n",
" - keras<=2.3.1\n",
" - matplotlib"
@@ -981,6 +982,7 @@
"\n",
"cd = CondaDependencies.create()\n",
"cd.add_tensorflow_conda_package()\n",
"cd.add_conda_package('h5py<=2.10.0')\n",
"cd.add_conda_package('keras<=2.3.1')\n",
"cd.add_pip_package(\"azureml-defaults\")\n",
"cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')\n",
@@ -1031,7 +1033,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** `print(service.get_logs())`"
"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(service.get_logs())"
]
},
{

View File

@@ -128,7 +128,7 @@
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" compute_target.wait_for_completion(show_output=True)\n",
"compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
@@ -714,7 +714,7 @@
"metadata": {
"authors": [
{
"name": "swatig"
"name": "nagaur"
}
],
"category": "training",

View File

@@ -153,9 +153,9 @@
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it uses the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"# can poll for a minimum number of nodes and for a specific timeout. \n",
"# if no min node count is provided it uses the scale settings for the cluster\n",
"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
@@ -572,7 +572,7 @@
"metadata": {
"authors": [
{
"name": "swatig"
"name": "nagaur"
}
],
"category": "training",

View File

@@ -306,9 +306,9 @@
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it uses the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"# can poll for a minimum number of nodes and for a specific timeout. \n",
"# if no min node count is provided it uses the scale settings for the cluster\n",
"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
@@ -852,7 +852,7 @@
"metadata": {
"authors": [
{
"name": "swatig"
"name": "nagaur"
}
],
"category": "training",

View File

@@ -322,9 +322,9 @@
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it uses the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"# can poll for a minimum number of nodes and for a specific timeout. \n",
"# if no min node count is provided it uses the scale settings for the cluster\n",
"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
@@ -1135,7 +1135,7 @@
"metadata": {
"authors": [
{
"name": "swatig"
"name": "nagaur"
}
],
"category": "training",

View File

@@ -30,7 +30,6 @@ Using these samples, you will learn how to do the following.
| File/folder | Description |
|-------------------|--------------------------------------------|
| [devenv_setup.ipynb](setup/devenv_setup.ipynb) | Notebook to setup virtual network for using Azure Machine Learning. Needed for the Pong and Minecraft examples. |
| [cartpole_ci.ipynb](cartpole-on-compute-instance/cartpole_ci.ipynb) | Notebook to train a Cartpole playing agent on an Azure Machine Learning Compute Instance |
| [cartpole_sc.ipynb](cartpole-on-single-compute/cartpole_sc.ipynb) | Notebook to train a Cartpole playing agent on an Azure Machine Learning Compute Cluster (single node) |
| [pong_rllib.ipynb](atari-on-distributed-compute/pong_rllib.ipynb) | Notebook for distributed training of Pong agent using RLlib on multiple compute targets |
@@ -46,9 +45,7 @@ To make use of these samples, you need the following.
* An Azure Machine Learning Workspace in the resource group.
* Azure Machine Learning training compute. These samples use the VM sizes `STANDARD_NC6` and `STANDARD_D2_V2`. If these are not available in your region,
you can replace them with other sizes.
* A virtual network set up in the resource group for samples that use multiple compute targets. The Cartpole examples do not need a virtual network.
* The [devenv_setup.ipynb](setup/devenv_setup.ipynb) notebook shows you how to create a virtual network. You can alternatively use an existing virtual network, make sure it's in the same region as workspace is.
* Any network security group defined on the virtual network must allow network traffic on ports used by Azure infrastructure services. This is described in more detail in the [devenv_setup.ipynb](setup/devenv_setup.ipynb) notebook.
* A virtual network set up in the resource group for samples that use multiple compute targets. The Cartpole and Multi-agent Particle examples do not need a virtual network. Any network security group defined on the virtual network must allow network traffic on ports used by Azure infrastructure services. Sample instructions are provided in the Atari Pong and Minecraft example notebooks.
## Setup

View File

@@ -57,7 +57,7 @@
"source": [
"## Prerequisite\n",
"\n",
"The user should have completed the [Reinforcement Learning in Azure Machine Learning - Setting Up Development Environment](../setup/devenv_setup.ipynb) to setup a virtual network. This virtual network will be used here for head and worker compute targets. It is highly recommended that the user should go through the [Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb) to understand the basics of Reinforcement Learning in Azure Machine Learning and Ray RLlib used in this notebook."
"It is highly recommended that the user should go through the [Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb) to understand the basics of Reinforcement Learning in Azure Machine Learning and Ray RLlib used in this notebook."
]
},
{
@@ -69,6 +69,7 @@
"\n",
"* Connecting to a workspace to enable communication between your local machine and remote resources\n",
"* Creating an experiment to track all your runs\n",
"* Setting up a virtual network\n",
"* Creating remote head and worker compute target on a virtual network to use for training"
]
},
@@ -140,9 +141,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Specify the name of your virtual network\n",
"### Create Virtual Network\n",
"\n",
"The resource group you use must contain a virtual network. Specify the name of the virtual network here created in the [Azure Machine Learning Reinforcement Learning Sample - Setting Up Development Environment](../setup/devenv_setup.ipynb)."
"If you are using separate compute targets for the Ray head and worker, a virtual network must be created in the resource group. If you have alraeady created a virtual network in the resource group, you can skip this step.\n",
"\n",
"To do this, you first must install the Azure Networking API.\n",
"\n",
"`pip install --upgrade azure-mgmt-network`"
]
},
{
@@ -151,15 +156,132 @@
"metadata": {},
"outputs": [],
"source": [
"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
"#!pip install --upgrade azure-mgmt-network"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azure.mgmt.network import NetworkManagementClient\n",
"\n",
"# Virtual network name\n",
"vnet_name = 'your_vnet'"
"vnet_name =\"rl_pong_vnet\"\n",
"\n",
"# Default subnet\n",
"subnet_name =\"default\"\n",
"\n",
"# The Azure subscription you are using\n",
"subscription_id=ws.subscription_id\n",
"\n",
"# The resource group for the reinforcement learning cluster\n",
"resource_group=ws.resource_group\n",
"\n",
"# Azure region of the resource group\n",
"location=ws.location\n",
"\n",
"network_client = NetworkManagementClient(ws._auth_object, subscription_id)\n",
"\n",
"async_vnet_creation = network_client.virtual_networks.create_or_update(\n",
" resource_group,\n",
" vnet_name,\n",
" {\n",
" 'location': location,\n",
" 'address_space': {\n",
" 'address_prefixes': ['10.0.0.0/16']\n",
" }\n",
" }\n",
")\n",
"\n",
"async_vnet_creation.wait()\n",
"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
"### Set up Network Security Group on Virtual Network\n",
"\n",
"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).\n",
"\n",
"A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).\n",
"\n",
"You may need to modify the code below to match your scenario."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azure.mgmt.network.models\n",
"\n",
"security_group_name = vnet_name + '-' + \"nsg\"\n",
"security_rule_name = \"AllowAML\"\n",
"\n",
"# Create a network security group\n",
"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
" location=location,\n",
" security_rules=[\n",
" azure.mgmt.network.models.SecurityRule(\n",
" name=security_rule_name,\n",
" access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
" description='Reinforcement Learning in Azure Machine Learning rule',\n",
" destination_address_prefix='*',\n",
" destination_port_range='29876-29877',\n",
" direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
" priority=400,\n",
" protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
" source_address_prefix='BatchNodeManagement',\n",
" source_port_range='*'\n",
" ),\n",
" ],\n",
")\n",
"\n",
"async_nsg_creation = network_client.network_security_groups.create_or_update(\n",
" resource_group,\n",
" security_group_name,\n",
" nsg_params,\n",
")\n",
"\n",
"async_nsg_creation.wait() \n",
"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
"\n",
"network_security_group = network_client.network_security_groups.get(\n",
" resource_group,\n",
" security_group_name,\n",
")\n",
"\n",
"# Define a subnet to be created with network security group\n",
"subnet = azure.mgmt.network.models.Subnet(\n",
" id='default',\n",
" address_prefix='10.0.0.0/24',\n",
" network_security_group=network_security_group\n",
" )\n",
" \n",
"# Create subnet on virtual network\n",
"async_subnet_creation = network_client.subnets.create_or_update(\n",
" resource_group_name=resource_group,\n",
" virtual_network_name=vnet_name,\n",
" subnet_name=subnet_name,\n",
" subnet_parameters=subnet\n",
")\n",
"\n",
"async_subnet_creation.wait()\n",
"print(\"Subnet created successfully:\", async_subnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review the virtual network security rules\n",
"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
]
},
{

View File

@@ -152,6 +152,9 @@
"from azureml.core.compute import ComputeInstance\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"import random\n",
"import string\n",
"\n",
"# Load current compute instance info\n",
"current_compute_instance = load_nbvm()\n",
"\n",
@@ -160,7 +163,8 @@
" print(\"Current compute instance:\", current_compute_instance)\n",
" instance_name = current_compute_instance['instance']\n",
"else:\n",
" instance_name = \"cartpole-ci-stdd2v2\"\n",
" # Compute instance name needs to be unique across all existing compute instances within an Azure region\n",
" instance_name = \"cartpole-ci-\" + \"\".join(random.choice(string.ascii_lowercase) for _ in range(5))\n",
" try:\n",
" instance = ComputeInstance(workspace=ws, name=instance_name)\n",
" print('Found existing instance, use it.')\n",
@@ -176,7 +180,7 @@
"compute_target = ws.compute_targets[instance_name]\n",
"\n",
"print(\"Compute target status:\")\n",
"print(compute_target.get_status().serialize())\n"
"print(compute_target.get_status().serialize())"
]
},
{

View File

@@ -77,11 +77,6 @@
"workspace. For detailed instructions see [Tutorial: Get started creating\n",
"your first ML experiment.](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup)\n",
"\n",
"In addition, please follow the instructions in the [Reinforcement Learning in\n",
"Azure Machine Learning - Setting Up Development Environment](../setup/devenv_setup.ipynb)\n",
"notebook to correctly set up a Virtual Network which is required for completing \n",
"this tutorial.\n",
"\n",
"While this is a standalone notebook, we highly recommend going over the\n",
"introductory notebooks for RL first.\n",
"- Getting started:\n",
@@ -96,6 +91,7 @@
"This includes:\n",
"- Connecting to your existing Azure Machine Learning workspace.\n",
"- Creating an experiment to track runs.\n",
"- Setting up a virtual network\n",
"- Creating remote compute targets for [Ray](https://docs.ray.io/en/latest/index.html).\n",
"\n",
"### Azure Machine Learning SDK\n",
@@ -161,6 +157,164 @@
"exp = Experiment(workspace=ws, name='minecraft-maze')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Virtual Network\n",
"\n",
"If you are using separate compute targets for the Ray head and worker, a virtual network must be created in the resource group. If you have alraeady created a virtual network in the resource group, you can skip this step.\n",
"\n",
"To do this, you first must install the Azure Networking API.\n",
"\n",
"`pip install --upgrade azure-mgmt-network`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
"#!pip install --upgrade azure-mgmt-network"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azure.mgmt.network import NetworkManagementClient\n",
"\n",
"# Virtual network name\n",
"vnet_name =\"rl_minecraft_vnet\"\n",
"\n",
"# Default subnet\n",
"subnet_name =\"default\"\n",
"\n",
"# The Azure subscription you are using\n",
"subscription_id=ws.subscription_id\n",
"\n",
"# The resource group for the reinforcement learning cluster\n",
"resource_group=ws.resource_group\n",
"\n",
"# Azure region of the resource group\n",
"location=ws.location\n",
"\n",
"network_client = NetworkManagementClient(ws._auth_object, subscription_id)\n",
"\n",
"async_vnet_creation = network_client.virtual_networks.create_or_update(\n",
" resource_group,\n",
" vnet_name,\n",
" {\n",
" 'location': location,\n",
" 'address_space': {\n",
" 'address_prefixes': ['10.0.0.0/16']\n",
" }\n",
" }\n",
")\n",
"\n",
"async_vnet_creation.wait()\n",
"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up Network Security Group on Virtual Network\n",
"\n",
"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).\n",
"\n",
"A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).\n",
"\n",
"You may need to modify the code below to match your scenario."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azure.mgmt.network.models\n",
"\n",
"security_group_name = vnet_name + '-' + \"nsg\"\n",
"security_rule_name = \"AllowAML\"\n",
"\n",
"# Create a network security group\n",
"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
" location=location,\n",
" security_rules=[\n",
" azure.mgmt.network.models.SecurityRule(\n",
" name=security_rule_name,\n",
" access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
" description='Reinforcement Learning in Azure Machine Learning rule',\n",
" destination_address_prefix='*',\n",
" destination_port_range='29876-29877',\n",
" direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
" priority=400,\n",
" protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
" source_address_prefix='BatchNodeManagement',\n",
" source_port_range='*'\n",
" ),\n",
" ],\n",
")\n",
"\n",
"async_nsg_creation = network_client.network_security_groups.create_or_update(\n",
" resource_group,\n",
" security_group_name,\n",
" nsg_params,\n",
")\n",
"\n",
"async_nsg_creation.wait() \n",
"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
"\n",
"network_security_group = network_client.network_security_groups.get(\n",
" resource_group,\n",
" security_group_name,\n",
")\n",
"\n",
"# Define a subnet to be created with network security group\n",
"subnet = azure.mgmt.network.models.Subnet(\n",
" id='default',\n",
" address_prefix='10.0.0.0/24',\n",
" network_security_group=network_security_group\n",
" )\n",
" \n",
"# Create subnet on virtual network\n",
"async_subnet_creation = network_client.subnets.create_or_update(\n",
" resource_group_name=resource_group,\n",
" virtual_network_name=vnet_name,\n",
" subnet_name=subnet_name,\n",
" subnet_parameters=subnet\n",
")\n",
"\n",
"async_subnet_creation.wait()\n",
"print(\"Subnet created successfully:\", async_subnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review the virtual network security rules\n",
"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from files.networkutils import *\n",
"\n",
"check_vnet_security_rules(ws._auth_object, ws.subscription_id, ws.resource_group, vnet_name, True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -203,12 +357,6 @@
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# please enter the name of your Virtual Network (see Prerequisites -> Workspace setup)\n",
"vnet_name = 'your_vnet'\n",
"\n",
"# name of the Virtual Network subnet ('default' the default name)\n",
"subnet_name = 'default'\n",
"\n",
"gpu_cluster_name = 'gpu-cl-nc6-vnet'\n",
"\n",
"try:\n",

View File

@@ -1,262 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/reinforcement-learning/setup/devenv_setup.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Reinforcement Learning in Azure Machine Learning - Setting Up Development Environment\n",
"\n",
"Ray multi-node cluster setup requires all worker nodes to be able to communicate with the head node. This notebook explains you how to setup a virtual network, to be used by the Ray head and worker compute targets, created and used in other notebook examples."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisite\n",
"\n",
"The user should have completed the Azure Machine Learning Tutorial: [Get started creating your first ML experiment with the Python SDK](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup). You will need to make sure that you have a valid subscription ID, a resource group, and an Azure Machine Learning workspace."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Azure Machine Learning SDK \n",
"Display the Azure Machine Learning SDK version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"Azure Machine Learning SDK Version: \", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get Azure Machine Learning workspace\n",
"Get a reference to an existing Azure Machine Learning workspace.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.location, ws.resource_group, sep = ' | ')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Virtual Network\n",
"\n",
"If you are using separate compute targets for the Ray head and worker, a virtual network must be created in the resource group. If you have alraeady created a virtual network in the resource group, you can skip this step.\n",
"\n",
"To do this, you first must install the Azure Networking API.\n",
"\n",
"`pip install --upgrade azure-mgmt-network`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
"#!pip install --upgrade azure-mgmt-network"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azure.mgmt.network import NetworkManagementClient\n",
"\n",
"# Virtual network name\n",
"vnet_name =\"your_vnet\"\n",
"\n",
"# Default subnet\n",
"subnet_name =\"default\"\n",
"\n",
"# The Azure subscription you are using\n",
"subscription_id=ws.subscription_id\n",
"\n",
"# The resource group for the reinforcement learning cluster\n",
"resource_group=ws.resource_group\n",
"\n",
"# Azure region of the resource group\n",
"location=ws.location\n",
"\n",
"network_client = NetworkManagementClient(ws._auth_object, subscription_id)\n",
"\n",
"async_vnet_creation = network_client.virtual_networks.create_or_update(\n",
" resource_group,\n",
" vnet_name,\n",
" {\n",
" 'location': location,\n",
" 'address_space': {\n",
" 'address_prefixes': ['10.0.0.0/16']\n",
" }\n",
" }\n",
")\n",
"\n",
"async_vnet_creation.wait()\n",
"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up Network Security Group on Virtual Network\n",
"\n",
"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).\n",
"\n",
"A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).\n",
"\n",
"You may need to modify the code below to match your scenario."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azure.mgmt.network.models\n",
"\n",
"security_group_name = vnet_name + '-' + \"nsg\"\n",
"security_rule_name = \"AllowAML\"\n",
"\n",
"# Create a network security group\n",
"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
" location=location,\n",
" security_rules=[\n",
" azure.mgmt.network.models.SecurityRule(\n",
" name=security_rule_name,\n",
" access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
" description='Reinforcement Learning in Azure Machine Learning rule',\n",
" destination_address_prefix='*',\n",
" destination_port_range='29876-29877',\n",
" direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
" priority=400,\n",
" protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
" source_address_prefix='BatchNodeManagement',\n",
" source_port_range='*'\n",
" ),\n",
" ],\n",
")\n",
"\n",
"async_nsg_creation = network_client.network_security_groups.create_or_update(\n",
" resource_group,\n",
" security_group_name,\n",
" nsg_params,\n",
")\n",
"\n",
"async_nsg_creation.wait() \n",
"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
"\n",
"network_security_group = network_client.network_security_groups.get(\n",
" resource_group,\n",
" security_group_name,\n",
")\n",
"\n",
"# Define a subnet to be created with network security group\n",
"subnet = azure.mgmt.network.models.Subnet(\n",
" id='default',\n",
" address_prefix='10.0.0.0/24',\n",
" network_security_group=network_security_group\n",
" )\n",
" \n",
"# Create subnet on virtual network\n",
"async_subnet_creation = network_client.subnets.create_or_update(\n",
" resource_group_name=resource_group,\n",
" virtual_network_name=vnet_name,\n",
" subnet_name=subnet_name,\n",
" subnet_parameters=subnet\n",
")\n",
"\n",
"async_subnet_creation.wait()\n",
"print(\"Subnet created successfully:\", async_subnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review the virtual network security rules\n",
"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from files.networkutils import *\n",
"\n",
"check_vnet_security_rules(ws._auth_object, ws.subscription_id, ws.resource_group, vnet_name, True)"
]
}
],
"metadata": {
"authors": [
{
"name": "vineetg"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"notice": "Copyright (c) Microsoft Corporation. All rights reserved.\u00e2\u20ac\u00afLicensed under the MIT License.\u00e2\u20ac\u00af "
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,4 +0,0 @@
name: devenv_setup
dependencies:
- pip:
- azureml-sdk

View File

@@ -100,7 +100,7 @@
"\n",
"# Check core SDK version number\n",
"\n",
"print(\"This notebook was created using SDK version 1.16.0, you are currently running version\", azureml.core.VERSION)"
"print(\"This notebook was created using SDK version 1.19.0, you are currently running version\", azureml.core.VERSION)"
]
},
{

View File

@@ -37,7 +37,6 @@
"1. [Other ways to create environments](#Other-ways-to-create-environments)\n",
" 1. From existing Conda environment\n",
" 1. From Conda or pip files\n",
"1. [Estimators and environments](#Estimators-and-environments) \n",
"1. [Using environments for inferencing](#Using-environments-for-inferencing)\n",
"1. [Docker settings](#Docker-settings)\n",
"1. [Spark and Azure Databricks settings](#Spark-and-Azure-Databricks-settings)\n",
@@ -424,11 +423,9 @@
"source": [
"## Next steps\n",
"\n",
"Learn more about remote runs on different compute targets:\n",
"Train with ML frameworks on Azure ML:\n",
"\n",
"* [Train on ML Compute](../../training/train-on-amlcompute/train-on-amlcompute.ipynb)\n",
"\n",
"* [Train on remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb)\n",
"* [Train with ML frameworks](../../ml-frameworks)\n",
"\n",
"Learn more about registering and deploying a model:\n",
"\n",

View File

@@ -35,7 +35,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| :star:[How to use Pipeline Drafts to create a Published Pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-pipeline-drafts.ipynb) | Demonstrates the use of Pipeline Drafts | Custom | AML Compute | None | Azure ML | None |
| :star:[Azure Machine Learning Pipeline with HyperDriveStep](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-parameter-tuning-with-hyperdrive.ipynb) | Demonstrates the use of HyperDriveStep | Custom | AML Compute | None | Azure ML | None |
| :star:[How to Publish a Pipeline and Invoke the REST endpoint](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-publish-and-run-using-rest-endpoint.ipynb) | Demonstrates the use of Published Pipelines | Custom | AML Compute | None | Azure ML | None |
| :star:[How to Setup a Schedule for a Published Pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-schedule-for-a-published-pipeline.ipynb) | Demonstrates the use of Schedules for Published Pipelines | Custom | AML Compute | None | Azure ML | None |
| :star:[How to Setup a Schedule for a Published Pipeline or Pipeline Endpoint](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-schedule-for-a-published-pipeline.ipynb) | Demonstrates the use of Schedules for Published Pipelines and Pipeline endpoints | Custom | AML Compute | None | Azure ML | None |
| [How to setup a versioned Pipeline Endpoint](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-versioned-pipeline-endpoints.ipynb) | Demonstrates the use of PipelineEndpoint to run a specific version of the Published Pipeline | Custom | AML Compute | None | Azure ML | None |
| :star:[How to use DataPath as a PipelineParameter](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-datapath-and-pipelineparameter.ipynb) | Demonstrates the use of DataPath as a PipelineParameter | Custom | AML Compute | None | Azure ML | None |
| :star:[How to use Dataset as a PipelineParameter](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-dataset-and-pipelineparameter.ipynb) | Demonstrates the use of Dataset as a PipelineParameter | Custom | AML Compute | None | Azure ML | None |
@@ -97,6 +97,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
## Other Notebooks
|Title| Task | Dataset | Training Compute | Deployment Target | ML Framework | Tags |
|:----|:-----|:-------:|:----------------:|:-----------------:|:------------:|:------------:|
| [DNN Text Featurization](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb) | Text featurization using DNNs for classification | None | AML Compute | None | None | None |
| [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) | | | | | | |
| [fairlearn-azureml-mitigation](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/fairness/fairlearn-azureml-mitigation.ipynb) | | | | | | |
| [upload-fairness-dashboard](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/fairness/upload-fairness-dashboard.ipynb) | | | | | | |
@@ -128,7 +129,6 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [cartpole_sc](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.ipynb) | | | | | | |
| [minecraft](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.ipynb) | | | | | | |
| [particle](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/multiagent-particle-envs/particle.ipynb) | | | | | | |
| [devenv_setup](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/setup/devenv_setup.ipynb) | | | | | | |
| [Logging APIs](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb) | Logging APIs and analyzing results | None | None | None | None | None |
| [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master//setup-environment/configuration.ipynb) | | | | | | |
| [tutorial-1st-experiment-sdk-train](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/create-first-ml-experiment/tutorial-1st-experiment-sdk-train.ipynb) | | | | | | |
@@ -140,4 +140,5 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [img-classification-part2-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb) | | | | | | |
| [img-classification-part3-deploy-encrypted](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/image-classification-mnist-data/img-classification-part3-deploy-encrypted.ipynb) | | | | | | |
| [tutorial-pipeline-batch-scoring-classification](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/machine-learning-pipelines-advanced/tutorial-pipeline-batch-scoring-classification.ipynb) | | | | | | |
| [azureml-quickstart](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/quickstart/azureml-quickstart.ipynb) | | | | | | |
| [regression-automated-ml](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb) | | | | | | |

View File

@@ -28,7 +28,7 @@ git clone https://github.com/Azure/MachineLearningNotebooks.git
pip install azureml-sdk[notebooks,tensorboard]
# install model explainability component
pip install azureml-sdk[explain]
pip install azureml-sdk[interpret]
# install automated ml components
pip install azureml-sdk[automl]
@@ -86,7 +86,7 @@ If you need additional Azure ML SDK components, you can either modify the Docker
pip install azureml-sdk[automl]
# install the core SDK and model explainability component
pip install azureml-sdk[explain]
pip install azureml-sdk[interpret]
# install the core SDK and experimental components
pip install azureml-sdk[contrib]

View File

@@ -102,7 +102,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -16,6 +16,7 @@ The following tutorials are intended to provide an introductory overview of Azur
| Tutorial | Description | Notebook | Task | Framework |
| --- | --- | --- | --- | --- |
| Azure Machine Learning in 10 minutes | Learn how to create and attach compute instances to notebooks, run an image classification model, track model metrics, and deploy a model| [quickstart](quickstart/azureml-quickstart.ipynb) | Learn Azure Machine Learning Concepts | PyTorch
| [Get Started (day1)](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local) | Learn the fundamental concepts of Azure Machine Learning to help onboard your existing code to Azure Machine Learning. This tutorial focuses heavily on submitting machine learning jobs to scalable cloud-based compute clusters. | [get-started-day1](get-started-day1/day1-part1-setup.ipynb) | Learn Azure Machine Learning Concepts | PyTorch
| [Train your first ML Model](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-train) | Learn the foundational design patterns in Azure Machine Learning and train a scikit-learn model based on a diabetes data set. | [tutorial-quickstart-train-model.ipynb](create-first-ml-experiment/tutorial-1st-experiment-sdk-train.ipynb) | Regression | Scikit-Learn
| [Train an image classification model](https://docs.microsoft.com/azure/machine-learning/tutorial-train-models-with-aml) | Train a scikit-learn image classification model. | [img-classification-part1-training.ipynb](image-classification-mnist-data/img-classification-part1-training.ipynb) | Image Classification | Scikit-Learn

View File

@@ -246,7 +246,7 @@
"\n",
"ws = Workspace.from_config()\n",
"ct = ws.compute_targets['cpu-cluster']\n",
"ct.delete()"
"# ct.delete()"
]
},
{

View File

@@ -0,0 +1,482 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/tutorials/quickstart/azureml-quickstart.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: Azure Machine Learning Quickstart\n",
"\n",
"In this tutorial, you learn how to quickly get started with Azure Machine Learning. Using a *compute instance* - a fully managed cloud-based VM that is pre-configured with the latest data science tools - you will train an image classification model using the CIFAR10 dataset.\n",
"\n",
"In this tutorial you will learn how to:\n",
"\n",
"* Create a compute instance and attach to a notebook\n",
"* Train an image classification model and log metrics\n",
"* Deploy the model\n",
"\n",
"## Prerequisites\n",
"\n",
"1. An Azure Machine Learning workspace\n",
"1. Familiar with the Python language and machine learning workflows.\n",
"\n",
"\n",
"## Create compute & attach to notebook\n",
"\n",
"To run this notebook you will need to create an Azure Machine Learning _compute instance_. The benefits of a compute instance over a local machine (e.g. laptop) or cloud VM are as follows:\n",
"\n",
"* It is a pre-configured with all the latest data science libaries (e.g. panads, scikit, TensorFlow, PyTorch) and tools (Jupyter, RStudio). In this tutorial we make extensive use of PyTorch, AzureML SDK, matplotlib and we do not need to install these components on a compute instance.\n",
"* Notebooks are seperate from the compute instance - this means that you can develop your notebook on a small VM size, and then seamlessly scale up (and/or use a GPU-enabled) the machine when needed to train a model.\n",
"* You can easily turn on/off the instance to control costs. \n",
"\n",
"To create compute, click on the + button at the top of the notebook viewer in Azure Machine Learning Studio:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create.PNG\" width=\"500\"/>\n",
"\n",
"This will pop up the __New compute instance__ blade, provide a valid __Compute name__ (valid characters are upper and lower case letters, digits, and the - character). Then click on __Create__. \n",
"\n",
"It will take approximately 3 minutes for the compute to be ready. When the compute is ready you will see a green light next to the compute name at the top of the notebook viewer:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create2.PNG\" width=\"500\"/>\n",
"\n",
"You will also notice that the notebook is attached to the __Python 3.6 - AzureML__ jupyter Kernel. Other kernels can be selected such as R. In addition, if you did have other instances you can switch to them by simply using the dropdown menu next to the Compute label.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Data\n",
"\n",
"For this tutorial, you will use the CIFAR10 dataset. It has the classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The images in CIFAR-10 three-channel color images of 32x32 pixels in size.\n",
"\n",
"The code cell below uses the PyTorch API to download the data to your compute instance, which should be quick (around 15 seconds). The data is divided into training and test sets.\n",
"\n",
"* **NOTE: The data is downloaded to the compute instance (in the `/tmp` directory) and not a durable cloud-based store like Azure Blob Storage or Azure Data Lake. This means if you delete the compute instance the data will be lost. The [getting started with Azure Machine Learning tutorial series](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local) shows how to create an Azure Machine Learning *dataset*, which aids durability, versioning, and collaboration.**"
]
},
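For reference, here is a minimal, hypothetical sketch of how the downloaded files could be uploaded to the workspace's default datastore and registered as a dataset for durability. The datastore path `cifar10` and dataset name `cifar10-files` below are illustrative and not part of the original tutorial:

```python
# Hypothetical sketch: persist the downloaded CIFAR-10 files as an Azure ML dataset.
from azureml.core import Dataset, Workspace

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Upload the files from the compute instance to durable blob storage
datastore.upload(src_dir='/tmp/data', target_path='cifar10', overwrite=True)

# Register a versioned FileDataset pointing at the uploaded files
dataset = Dataset.File.from_files(path=(datastore, 'cifar10'))
dataset = dataset.register(workspace=ws, name='cifar10-files', create_new_version=True)
print(dataset.name, dataset.version)
```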
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600881820920
}
},
"outputs": [],
"source": [
"import torch\n",
"import torch.optim as optim\n",
"import torchvision\n",
"import torchvision.transforms as transforms\n",
"\n",
"transform = transforms.Compose(\n",
" [transforms.ToTensor(),\n",
" transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n",
"\n",
"trainset = torchvision.datasets.CIFAR10(root='/tmp/data', train=True,\n",
" download=True, transform=transform)\n",
"trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,\n",
" shuffle=True, num_workers=2)\n",
"\n",
"testset = torchvision.datasets.CIFAR10(root='/tmp/data', train=False,\n",
" download=True, transform=transform)\n",
"testloader = torch.utils.data.DataLoader(testset, batch_size=4,\n",
" shuffle=False, num_workers=2)\n",
"\n",
"classes = ('plane', 'car', 'bird', 'cat',\n",
" 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Take a look at the data\n",
"In the following cell, you have some python code that displays the first batch of 4 CIFAR10 images:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600882160868
}
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"def imshow(img):\n",
" img = img / 2 + 0.5 # unnormalize\n",
" npimg = img.numpy()\n",
" plt.imshow(np.transpose(npimg, (1, 2, 0)))\n",
" plt.show()\n",
"\n",
"\n",
"# get some random training images\n",
"dataiter = iter(trainloader)\n",
"images, labels = dataiter.next()\n",
"\n",
"# show images\n",
"imshow(torchvision.utils.make_grid(images))\n",
"# print labels\n",
"print(' '.join('%5s' % classes[labels[j]] for j in range(4)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train model and log metrics\n",
"\n",
"In the directory `model` you will see a file called [model.py](./model/model.py) that defines the neural network architecture. The model is trained using the code below.\n",
"\n",
"* **Note: The model training take around 4 minutes to complete. The benefit of a compute instance is that the notebooks are separate from the compute - therefore you can easily switch to a different size/type of instance. For example, you could switch to run this training on a GPU-based compute instance if you had one provisioned. In the code below you can see that we have included `torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")`, which detects whether you are using a CPU or GPU machine.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600882387754
},
"tags": [
"local run"
]
},
"outputs": [],
"source": [
"from model.model import Net\n",
"from azureml.core import Experiment\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"\n",
"device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
"device\n",
"\n",
"exp = Experiment(workspace=ws, name=\"cifar10-experiment\")\n",
"run = exp.start_logging(snapshot_directory=None)\n",
"\n",
"# define convolutional network\n",
"net = Net()\n",
"net.to(device)\n",
"\n",
"# set up pytorch loss / optimizer\n",
"criterion = torch.nn.CrossEntropyLoss()\n",
"optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)\n",
"\n",
"run.log(\"learning rate\", 0.001)\n",
"run.log(\"momentum\", 0.9)\n",
"\n",
"# train the network\n",
"for epoch in range(1):\n",
" running_loss = 0.0\n",
" for i, data in enumerate(trainloader, 0):\n",
" # unpack the data\n",
" inputs, labels = data[0].to(device), data[1].to(device)\n",
"\n",
" # zero the parameter gradients\n",
" optimizer.zero_grad()\n",
"\n",
" # forward + backward + optimize\n",
" outputs = net(inputs)\n",
" loss = criterion(outputs, labels)\n",
" loss.backward()\n",
" optimizer.step()\n",
"\n",
" # print statistics\n",
" running_loss += loss.item()\n",
" if i % 2000 == 1999:\n",
" loss = running_loss / 2000\n",
" run.log(\"loss\", loss)\n",
" print(f'epoch={epoch + 1}, batch={i + 1:5}: loss {loss:.2f}')\n",
" running_loss = 0.0\n",
"\n",
"print('Finished Training')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have executed the cell below you can view the metrics updating in real time in the Azure Machine Learning studio:\n",
"\n",
"1. Select **Experiments** (left-hand menu)\n",
"1. Select **cifar10-experiment**\n",
"1. Select **Run 1**\n",
"1. Select the **Metrics** Tab\n",
"\n",
"The metrics tab will display the following graph:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/metrics-capture.PNG\" alt=\"dataset details\" width=\"500\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Understand the code\n",
"\n",
"The code is based on the [Pytorch 60minute Blitz](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py) where we have also added a few additional lines of code to track the loss metric as the neural network trains.\n",
"\n",
"| Code | Description | \n",
"| ------------- | ---------- |\n",
"| `experiment = Experiment( ... )` | [Experiment](https://docs.microsoft.com/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py&preserve-view=true) provides a simple way to organize multiple runs under a single name. Later you can see how experiments make it easy to compare metrics between dozens of runs. |\n",
"| `run.log()` | This will log the metrics to Azure Machine Learning. |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Version control models with the Model Registry\n",
"\n",
"You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. Azure Machine Learning supports any model that can be loaded through Python 3.\n",
"\n",
"The code below does:\n",
"\n",
"1. Saves the model on the compute instance\n",
"1. Uploads the model file to the run (if you look in the experiment on Azure Machine Learning studio you should see on the **Outputs + logs** tab the model has been saved in the run)\n",
"1. Registers the uploaded model file\n",
"1. Transitions the run to a completed state"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600888071066
},
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core import Model\n",
"\n",
"PATH = 'cifar_net.pth'\n",
"torch.save(net.state_dict(), PATH)\n",
"\n",
"run.upload_file(name=PATH, path_or_stream=PATH)\n",
"model = run.register_model(model_name='cifar10-model', \n",
" model_path=PATH,\n",
" model_framework=Model.Framework.PYTORCH,\n",
" description='cifar10 model')\n",
" \n",
"run.complete()"
]
},
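Since each registration under the same name increments the version, a registered model can later be retrieved by name (latest) or pinned to a specific version. A small hedged sketch follows; the version number is illustrative:

```python
from azureml.core import Model

# Latest registered version of the model
latest = Model(ws, name='cifar10-model')

# Pin a specific version (illustrative; use a version that exists in your registry)
pinned = Model(ws, name='cifar10-model', version=1)

print(latest.name, latest.version, pinned.version)
```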
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View model in the model registry\n",
"\n",
"You can see the stored model by navigating to **Models** in the left-hand menu bar of Azure Machine Learning Studio. Click on the **cifar10-model** and you can see the details of the model like the experiement run id that created the model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy the model\n",
"\n",
"The next cell deploys the model to an Azure Container Instance so that you can score data in real-time (Azure Machine Learning also provides mechanisms to do batch scoring). A real-time endpoint allows application developers to integrate machine learning into their apps.\n",
"\n",
"* **Note: The deployment takes around 3 minutes to complete.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core import Environment, Model\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AciWebservice\n",
"\n",
"environment = Environment.get(ws, \"AzureML-PyTorch-1.6-CPU\")\n",
"model = Model(ws, \"cifar10-model\")\n",
"\n",
"service_name = 'cifar-service'\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"\n",
"service = Model.deploy(workspace=ws,\n",
" name=service_name,\n",
" models=[model],\n",
" inference_config=inference_config,\n",
" deployment_config=aci_config,\n",
" overwrite=True)\n",
"service.wait_for_deployment(show_output=True)"
]
},
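The deployment above references an entry script, `score.py`, which ships with the tutorial and is not reproduced here. As a rough, hypothetical sketch (assuming the registered `cifar_net.pth` weights and the `Net` class from `model/model.py` are available to the service), it might look like this:

```python
# Hypothetical sketch of a scoring entry script; the real score.py in the repo may differ.
import json
import os

import torch
from model.model import Net  # assumes model.py is packaged alongside the entry script

def init():
    # Called once when the service starts: load the registered model weights.
    global model
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'cifar_net.pth')
    model = Net()
    model.load_state_dict(torch.load(model_path, map_location='cpu'))
    model.eval()

def run(data):
    # Called per request: deserialize the JSON payload, predict, return class indices.
    images = torch.tensor(json.loads(data)['data'])
    with torch.no_grad():
        outputs = model(images)
    return outputs.argmax(dim=1).tolist()
```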
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Understand the code\n",
"\n",
"| Code | Description | \n",
"| ------------- | ---------- |\n",
"| `environment = Environment.get()` | [Environment](https://docs.microsoft.com/python/api/overview/azure/ml/?view=azure-ml-py#environment) specify the Python packages, environment variables, and software settings around your training and scoring scripts. In this case, you are using a *curated environment* that has all the packages to run PyTorch. |\n",
"| `inference_config = InferenceConfig()` | This specifies the inference (scoring) configuration for the deployment such as the script to use when scoring (see below) and on what environment. |\n",
"| `service = Model.deploy()` | Deploy the model. |\n",
"\n",
"The [*scoring script*](score.py) file is has two functions:\n",
"\n",
"1. an `init` function that executes once when the service starts - in this function you normally get the model from the registry and set global variables\n",
"1. a `run(data)` function that executes each time a call is made to the service. In this function, you normally deserialize the json, run a prediction and output the predicted result.\n",
"\n",
"\n",
"## Test the model service\n",
"\n",
"In the next cell, you get some unseen data from the test loader:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataiter = iter(testloader)\n",
"images, labels = dataiter.next()\n",
"\n",
"# print images\n",
"imshow(torchvision.utils.make_grid(images))\n",
"print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, the next cell runs scores the above images using the deployed model service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"input_payload = json.dumps({\n",
" 'data': images.tolist()\n",
"})\n",
"\n",
"output = service.run(input_payload)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up resources\n",
"\n",
"To clean up the resources after this quickstart, firstly delete the Model service using:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next stop the compute instance by following these steps:\n",
"\n",
"1. Go to **Compute** in the left-hand menu of the Azure Machine Learning studio\n",
"1. Select your compute instance\n",
"1. Select **Stop**\n",
"\n",
"\n",
"**Important: The resources you created can be used as prerequisites to other Azure Machine Learning tutorials and how-to articles.** If you don't plan to use the resources you created, delete them, so you don't incur any charges:\n",
"\n",
"1. In the Azure portal, select **Resource groups** on the far left.\n",
"1. From the list, select the resource group you created.\n",
"1. Select **Delete resource group**.\n",
"1. Enter the resource group name. Then select **Delete**.\n",
"\n",
"You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps\n",
"\n",
"In this tutorial, you have seen how to run your machine learning code on a fully managed, pre-configured cloud-based VM called a *compute instance*. Having a compute instance for your development environment removes the burden of installing data science tooling and libraries (for example, Jupyter, PyTorch, TensorFlow, Scikit) and allows you to easily scale up/down the compute power (RAM, cores) since the notebooks are separated from the VM. \n",
"\n",
"It is often the case that once you have your machine learning code working in a development environment that you want to productionize this by running as a **_job_** - ideally on a schedule or trigger (for example, arrival of new data). To this end, we recommend that you follow [**the day 1 getting started with Azure Machine Learning tutorial**](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local). This day 1 tutorial is focussed on running jobs-based machine learning code in the cloud."
]
}
],
"metadata": {
"authors": [
{
"name": "samkemp"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -0,0 +1,7 @@
name: azureml-quickstart
dependencies:
- pip:
- azureml-sdk
- pytorch
- torchvision
- matplotlib

View File

@@ -0,0 +1,22 @@
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x

View File

@@ -306,7 +306,7 @@
"\n",
"|Property| Value in this tutorial |Description|\n",
"|----|----|---|\n",
"|**iteration_timeout_minutes**|2|Time limit in minutes for each iteration. Reduce this value to decrease total runtime.|\n",
"|**iteration_timeout_minutes**|10|Time limit in minutes for each iteration. Increase this value for larger datasets that need more time for each iteration.|\n",
"|**experiment_timeout_hours**|0.3|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|\n",
"|**enable_early_stopping**|True|Flag to enable early termination if the score is not improving in the short term.|\n",
"|**primary_metric**| spearman_correlation | Metric that you want to optimize. The best-fit model will be chosen based on this metric.|\n",
@@ -324,7 +324,7 @@
"import logging\n",
"\n",
"automl_settings = {\n",
" \"iteration_timeout_minutes\": 2,\n",
" \"iteration_timeout_minutes\": 10,\n",
" \"experiment_timeout_hours\": 0.3,\n",
" \"enable_early_stopping\": True,\n",
" \"primary_metric\": 'spearman_correlation',\n",