Mirror of https://github.com/Azure/MachineLearningNotebooks.git (synced 2025-12-20 09:37:04 -05:00)
Compare commits: `lostmygith`...`azureml-sd` (28 commits)
| Author | SHA1 | Date |
|---|---|---|
| | `1d05efaac2` | |
| | `3adebd1127` | |
| | `a6817063df` | |
| | `a79f8c254a` | |
| | `fb4f287458` | |
| | `41366a4af0` | |
| | `74deb14fac` | |
| | `4ed1d445ae` | |
| | `b5c15db0b4` | |
| | `91d43bade6` | |
| | `bd750f5817` | |
| | `637bcc5973` | |
| | `ba741fb18d` | |
| | `ac0ad8d487` | |
| | `5019ad6c5a` | |
| | `41a2ebd2b3` | |
| | `53e3283d1d` | |
| | `ba9c4c5465` | |
| | `a6c65f00ec` | |
| | `95072eabc2` | |
| | `12905ef254` | |
| | `4cf56eee91` | |
| | `d345ff6c37` | |
| | `560dcac0a0` | |
| | `322087a58c` | |
| | `e255c000ab` | |
| | `7871e37ec0` | |
| | `58e584e7eb` | |
```diff
@@ -28,7 +28,7 @@ git clone https://github.com/Azure/MachineLearningNotebooks.git
 pip install azureml-sdk[notebooks,tensorboard]
 
 # install model explainability component
-pip install azureml-sdk[explain]
+pip install azureml-sdk[interpret]
 
 # install automated ml components
 pip install azureml-sdk[automl]
```
```diff
@@ -86,7 +86,7 @@ If you need additional Azure ML SDK components, you can either modify the Docker
 pip install azureml-sdk[automl]
 
 # install the core SDK and model explainability component
-pip install azureml-sdk[explain]
+pip install azureml-sdk[interpret]
 
 # install the core SDK and experimental components
 pip install azureml-sdk[contrib]
```
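Both hunks above swap the `explain` extra for `interpret`. The bracketed suffixes are pip "extras" and can be combined into a single quoted requirement; a sketch (this particular combination is illustrative, not taken from the README):

```shell
# Combine several azureml-sdk extras in one requirement spec.
# Quote it so shells such as zsh do not treat the brackets as a glob.
SPEC="azureml-sdk[notebooks,automl,interpret]"
echo pip install "$SPEC"
```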
**README.md** (16 changed lines)
```diff
@@ -1,6 +1,8 @@
 # Azure Machine Learning service example notebooks
 
-This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
+> a community-driven repository of examples using mlflow for tracking can be found at https://github.com/Azure/azureml-examples
 
+This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
+
 ![Azure ML workflow](https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/service/media/concept-azure-machine-learning-architecture/workflow.png)
 
```
```diff
@@ -18,10 +20,10 @@ This [index](./index.md) should assist in navigating the Azure Machine Learning
 If you want to...
 
 * ...try out and explore Azure ML, start with image classification tutorials: [Part 1 (Training)](./tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb) and [Part 2 (Deployment)](./tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb).
-* ...learn about experimentation and tracking run history, first [train within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) and [using logging APIs](./how-to-use-azureml/training/logging-api/logging-api.ipynb).
+* ...learn about experimentation and tracking run history: [track and monitor experiments](./how-to-use-azureml/track-and-monitor-experiments).
-* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
+* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
-* ...deploy models as a realtime scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
+* ...deploy models as a realtime scoring service, first learn the basics by [deploying to Azure Container Instance](./how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.ipynb), then learn how to [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
-* ...deploy models as a batch scoring service, first [train a model within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
+* ...deploy models as a batch scoring service: [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb) and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
 * ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb).
 
 ## Tutorials
```
```diff
@@ -33,13 +35,12 @@ The [Tutorials](./tutorials) folder contains notebooks for the tutorials describ
 The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
 
 - [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets
-- [Training with Deep Learning](./how-to-use-azureml/training-with-deep-learning) - Examples demonstrating how to build deep learning models using estimators and parameter sweeps
+- [Training with ML and DL frameworks](./how-to-use-azureml/ml-frameworks) - Examples demonstrating how to build and train machine learning models at scale on Azure ML and perform hyperparameter tuning.
 - [Manage Azure ML Service](./how-to-use-azureml/manage-azureml-service) - Examples how to perform tasks, such as authenticate against Azure ML service in different ways.
 - [Automated Machine Learning](./how-to-use-azureml/automated-machine-learning) - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models
 - [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
 - [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions
 - [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks
-- [Monitor Models](./how-to-use-azureml/monitor-models) - Examples showing how to enable model monitoring services such as DataDrift
 - [Reinforcement Learning](./how-to-use-azureml/reinforcement-learning) - Examples showing how to train reinforcement learning agents
 
 ---
```
```diff
@@ -58,7 +59,6 @@ Visit this [community repository](https://github.com/microsoft/MLOps/tree/master
 ## Projects using Azure Machine Learning
 
 Visit following repos to see projects contributed by Azure ML users:
-- [AMLSamples](https://github.com/Azure/AMLSamples) Number of end-to-end examples, including face recognition, predictive maintenance, customer churn and sentiment analysis.
 - [Learn about Natural Language Processing best practices using Azure Machine Learning service](https://github.com/microsoft/nlp)
 - [Pre-Train BERT models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
 - [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)
```
```diff
@@ -103,7 +103,7 @@
 "source": [
 "import azureml.core\n",
 "\n",
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
```
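The hunk above only bumps the version string that the notebook prints alongside `azureml.core.VERSION`; the reader is left to compare the two printed values by eye. A numeric comparison can be sketched with the stdlib alone (the helper name and the stand-in version are ours, not part of the SDK):

```python
def version_tuple(version):
    """Convert a dotted version string like '1.20.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

# The notebook was authored against 1.20.0; warn if the installed SDK is older.
created_with = "1.20.0"
installed = "1.22.0"  # stand-in for azureml.core.VERSION
if version_tuple(installed) < version_tuple(created_with):
    print("Installed SDK is older than the version this notebook was created with")
```

Note that string comparison would get this wrong ("1.9.0" > "1.20.0" lexicographically), which is why the tuple conversion is needed.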
```diff
@@ -38,7 +38,7 @@
 "## Introduction\n",
 "This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).\n",
 "\n",
-"We will apply the [grid search algorithm](https://fairlearn.github.io/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n",
+"We will apply the [grid search algorithm](https://fairlearn.github.io/master/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n",
 "\n",
 "### Setup\n",
 "\n",
```
```diff
@@ -46,7 +46,7 @@
 "Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.\n",
 "This notebook also requires the following packages:\n",
 "* `azureml-contrib-fairness`\n",
-"* `fairlearn==0.4.6`\n",
+"* `fairlearn==0.4.6` (v0.5.0 will work with minor modifications)\n",
 "* `joblib`\n",
 "* `shap`\n",
 "\n",
```
```diff
@@ -62,13 +62,20 @@
 "# !pip install --upgrade scikit-learn>=0.22.1"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook."
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "<a id=\"LoadingData\"></a>\n",
 "## Loading the Data\n",
-"We use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:"
+"We use the well-known `adult` census dataset, which we will fetch from the OpenML website. We start with a fairly unremarkable set of imports:"
 ]
 },
 {
```
```diff
@@ -79,9 +86,16 @@
 "source": [
 "from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate\n",
 "from fairlearn.widget import FairlearnDashboard\n",
-"from sklearn import svm\n",
-"from sklearn.preprocessing import LabelEncoder, StandardScaler\n",
+"\n",
+"from sklearn.compose import ColumnTransformer\n",
+"from sklearn.datasets import fetch_openml\n",
+"from sklearn.impute import SimpleImputer\n",
 "from sklearn.linear_model import LogisticRegression\n",
+"from sklearn.model_selection import train_test_split\n",
+"from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
+"from sklearn.compose import make_column_selector as selector\n",
+"from sklearn.pipeline import Pipeline\n",
+"\n",
 "import pandas as pd"
 ]
 },
```
```diff
@@ -89,7 +103,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can now load and inspect the data from the `shap` package:"
+"We can now load and inspect the data:"
 ]
 },
 {
```
```diff
@@ -98,10 +112,13 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from sklearn.datasets import fetch_openml\n",
-"data = fetch_openml(data_id=1590, as_frame=True)\n",
+"from fairness_nb_utils import fetch_openml_with_retries\n",
+"\n",
+"data = fetch_openml_with_retries(data_id=1590)\n",
+" \n",
+"# Extract the items we want\n",
 "X_raw = data.data\n",
-"Y = (data.target == '>50K') * 1\n",
+"y = (data.target == '>50K') * 1\n",
 "\n",
 "X_raw[\"race\"].value_counts().to_dict()"
 ]
```
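Both the old `Y` and the new `y` lines above binarize the income target with `(data.target == '>50K') * 1`, which multiplies a boolean mask by 1. The same mapping written in plain Python (toy labels, not the real dataset):

```python
# Binarize income labels the way the pandas expression does:
# True -> 1 for '>50K', False -> 0 for anything else.
targets = ["<=50K", ">50K", ">50K", "<=50K"]
y = [int(t == ">50K") for t in targets]
print(y)  # → [0, 1, 1, 0]
```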
```diff
@@ -110,7 +127,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going separate this attribute out and drop it from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). We also separate out the Race column, but we will not perform any mitigation based on it. Finally, we perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms"
+"We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here:"
 ]
 },
 {
```
```diff
@@ -120,23 +137,14 @@
 "outputs": [],
 "source": [
 "A = X_raw[['sex','race']]\n",
-"X = X_raw.drop(labels=['sex', 'race'],axis = 1)\n",
-"X_dummies = pd.get_dummies(X)\n",
-"\n",
-"sc = StandardScaler()\n",
-"X_scaled = sc.fit_transform(X_dummies)\n",
-"X_scaled = pd.DataFrame(X_scaled, columns=X_dummies.columns)\n",
-"\n",
-"\n",
-"le = LabelEncoder()\n",
-"Y = le.fit_transform(Y)"
+"X_raw = X_raw.drop(labels=['sex', 'race'],axis = 1)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"With our data prepared, we can make the conventional split in to 'test' and 'train' subsets:"
+"We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset."
 ]
 },
 {
```
```diff
@@ -145,21 +153,76 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from sklearn.model_selection import train_test_split\n",
-"X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_scaled, \n",
-" Y, \n",
-" A,\n",
-" test_size = 0.2,\n",
-" random_state=0,\n",
-" stratify=Y)\n",
+"(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(\n",
+"    X_raw, y, A, test_size=0.3, random_state=12345, stratify=y\n",
+")\n",
+"\n",
+"# Ensure indices are aligned between X, y and A,\n",
+"# after all the slicing and splitting of DataFrames\n",
+"# and Series\n",
 "\n",
-"# Work around indexing issue\n",
 "X_train = X_train.reset_index(drop=True)\n",
-"A_train = A_train.reset_index(drop=True)\n",
 "X_test = X_test.reset_index(drop=True)\n",
+"y_train = y_train.reset_index(drop=True)\n",
+"y_test = y_test.reset_index(drop=True)\n",
+"A_train = A_train.reset_index(drop=True)\n",
 "A_test = A_test.reset_index(drop=True)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).\n",
+"\n",
+"For this preprocessing, we make use of `Pipeline` objects from `sklearn`:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"numeric_transformer = Pipeline(\n",
+"    steps=[\n",
+"        (\"impute\", SimpleImputer()),\n",
+"        (\"scaler\", StandardScaler()),\n",
+"    ]\n",
+")\n",
+"\n",
+"categorical_transformer = Pipeline(\n",
+"    [\n",
+"        (\"impute\", SimpleImputer(strategy=\"most_frequent\")),\n",
+"        (\"ohe\", OneHotEncoder(handle_unknown=\"ignore\", sparse=False)),\n",
+"    ]\n",
+")\n",
+"\n",
+"preprocessor = ColumnTransformer(\n",
+"    transformers=[\n",
+"        (\"num\", numeric_transformer, selector(dtype_exclude=\"category\")),\n",
+"        (\"cat\", categorical_transformer, selector(dtype_include=\"category\")),\n",
+"    ]\n",
+")"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Now, the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"X_train = preprocessor.fit_transform(X_train)\n",
+"X_test = preprocessor.transform(X_test)"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
```
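The cells added above impute missing values, standardize numeric columns, and one-hot encode categorical ones via sklearn `Pipeline` and `ColumnTransformer` objects. A hand-rolled, stdlib-only sketch of the two core transforms on one toy column each (not sklearn's implementation, just the arithmetic it performs):

```python
import statistics

# One-hot encode a categorical column: one 0/1 indicator per observed category.
colour = ["red", "blue", "red"]
categories = sorted(set(colour))            # ['blue', 'red']
one_hot = [[int(v == c) for c in categories] for v in colour]

# Standardize a numeric column: subtract the mean, divide by the std deviation.
age = [20.0, 30.0, 40.0]
mean, std = statistics.fmean(age), statistics.pstdev(age)
scaled = [(v - mean) / std for v in age]
```

Fitting `mean`/`std` and the category set on the training split only, then reusing them on the test split, is exactly the leakage-avoidance point the notebook makes with `fit_transform` versus `transform`.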
```diff
@@ -178,7 +241,7 @@
 "source": [
 "unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)\n",
 "\n",
-"unmitigated_predictor.fit(X_train, Y_train)"
+"unmitigated_predictor.fit(X_train, y_train)"
 ]
 },
 {
```
```diff
@@ -195,7 +258,7 @@
 "outputs": [],
 "source": [
 "FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],\n",
-"                   y_true=Y_test,\n",
+"                   y_true=y_test,\n",
 "                   y_pred={\"unmitigated\": unmitigated_predictor.predict(X_test)})"
 ]
 },
```
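Demographic parity, the fairness notion the dashboard above surfaces, compares selection rates (the fraction of positive predictions) across sensitive-feature groups. A hand-rolled sketch of that disparity metric (not Fairlearn's API; the data is made up):

```python
def selection_rates(y_pred, groups):
    """Fraction of positive predictions within each sensitive-feature group."""
    totals, positives = {}, {}
    for pred, g in zip(y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(y_pred, groups):
    """Largest gap between group selection rates; 0 means demographic parity."""
    rates = selection_rates(y_pred, groups).values()
    return max(rates) - min(rates)

preds = [1, 0, 1, 1, 0, 0]
sex   = ["F", "F", "F", "M", "M", "M"]
print(demographic_parity_difference(preds, sex))  # F: 2/3, M: 1/3 -> gap of 1/3
```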
```diff
@@ -246,9 +309,10 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"sweep.fit(X_train, Y_train,\n",
+"sweep.fit(X_train, y_train,\n",
 "          sensitive_features=A_train.sex)\n",
 "\n",
+"# For Fairlearn v0.5.0, need sweep.predictors_\n",
 "predictors = sweep._predictors"
 ]
 },
```
```diff
@@ -270,9 +334,9 @@
 "    classifier = lambda X: m.predict(X)\n",
 "    \n",
 "    error = ErrorRate()\n",
-"    error.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.sex)\n",
+"    error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)\n",
 "    disparity = DemographicParity()\n",
-"    disparity.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.sex)\n",
+"    disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)\n",
 "    \n",
 "    errors.append(error.gamma(classifier)[0])\n",
 "    disparities.append(disparity.gamma(classifier).max())\n",
```
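The loop above collects one (error, disparity) pair per swept predictor; the Pareto front discussed further down consists of the pairs not dominated by any other. A small sketch of that filter (the model points below are hypothetical):

```python
def pareto_front(points):
    """Keep the (error, disparity) points not dominated by any other point.

    A point is dominated if some other point is no worse in both
    coordinates and strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# (error, disparity) for four hypothetical predictors; the last one is
# beaten on both axes by the second, so it falls off the front.
models = [(0.10, 0.30), (0.12, 0.10), (0.15, 0.05), (0.20, 0.20)]
print(pareto_front(models))  # → [(0.1, 0.3), (0.12, 0.1), (0.15, 0.05)]
```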
```diff
@@ -326,7 +390,7 @@
 "source": [
 "FairlearnDashboard(sensitive_features=A_test, \n",
 "                   sensitive_feature_names=['Sex', 'Race'],\n",
-"                   y_true=Y_test.tolist(),\n",
+"                   y_true=y_test.tolist(),\n",
 "                   y_pred=predictions_dominant)"
 ]
 },
```
```diff
@@ -334,7 +398,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"When using sex as the sensitive feature, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute \"sex\"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.\n",
+"When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute \"sex\"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.\n",
 "\n",
 "By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints."
 ]
```
```diff
@@ -441,7 +505,7 @@
 "from fairlearn.metrics._group_metric_set import _create_group_metric_set\n",
 "\n",
 "\n",
-"dash_dict = _create_group_metric_set(y_true=Y_test,\n",
+"dash_dict = _create_group_metric_set(y_true=y_test,\n",
 "                                     predictions=predictions_dominant_ids,\n",
 "                                     sensitive_features=sf,\n",
 "                                     prediction_type='binary_classification')"
```
**contrib/fairness/fairness_nb_utils.py** (new file, 28 lines)
@@ -0,0 +1,28 @@
|
|||||||
|
# ---------------------------------------------------------
|
||||||
|
# Copyright (c) Microsoft Corporation. All rights reserved.
|
||||||
|
# ---------------------------------------------------------
|
||||||
|
|
||||||
|
"""Utilities for azureml-contrib-fairness notebooks."""
|
||||||
|
|
||||||
|
from sklearn.datasets import fetch_openml
|
||||||
|
import time
|
||||||
|
|
||||||
|
|
||||||
|
def fetch_openml_with_retries(data_id, max_retries=4, retry_delay=60):
|
||||||
|
"""Fetch a given dataset from OpenML with retries as specified."""
|
||||||
|
for i in range(max_retries):
|
||||||
|
try:
|
||||||
|
print("Download attempt {0} of {1}".format(i + 1, max_retries))
|
||||||
|
data = fetch_openml(data_id=data_id, as_frame=True)
|
||||||
|
break
|
||||||
|
except Exception as e:
|
||||||
|
print("Download attempt failed with exception:")
|
||||||
|
print(e)
|
||||||
|
if i + 1 != max_retries:
|
||||||
|
print("Will retry after {0} seconds".format(retry_delay))
|
||||||
|
time.sleep(retry_delay)
|
||||||
|
retry_delay = retry_delay * 2
|
||||||
|
else:
|
||||||
|
raise RuntimeError("Unable to download dataset from OpenML")
|
||||||
|
|
||||||
|
return data
|
||||||
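The helper above retries the download with exponential backoff, doubling `retry_delay` after each failed attempt and raising only when every attempt has failed (via the `for`/`else` clause). The same pattern can be sketched independently of OpenML with a stubbed, deliberately flaky fetch function (all names here are hypothetical, not part of the repository):

```python
import time

def with_retries(fetch, max_retries=4, retry_delay=60, sleep=time.sleep):
    """Call fetch() up to max_retries times, doubling the delay after each failure."""
    for i in range(max_retries):
        try:
            return fetch()
        except Exception:
            if i + 1 != max_retries:
                sleep(retry_delay)
                retry_delay *= 2
    raise RuntimeError("Unable to download dataset")

# Stubbed fetch that fails twice, then succeeds on the third attempt
attempts = []
def flaky_fetch():
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError("transient network error")
    return "data"

# Inject a fake sleep so the sketch runs instantly and records the delays used
delays = []
result = with_retries(flaky_fetch, retry_delay=1, sleep=delays.append)
print(result)  # -> data
print(delays)  # -> [1, 2]  (delay doubled after each failure)
```

Injecting `sleep` as a parameter is what makes the backoff schedule testable without actually waiting.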
@@ -48,7 +48,7 @@
 "Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.\n",
 "This notebook also requires the following packages:\n",
 "* `azureml-contrib-fairness`\n",
-"* `fairlearn==0.4.6`\n",
+"* `fairlearn==0.4.6` (should also work with v0.5.0)\n",
 "* `joblib`\n",
 "* `shap`\n",
 "\n",
@@ -64,13 +64,20 @@
 "# !pip install --upgrade scikit-learn>=0.22.1"
 ]
 },
+{
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+  "Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook."
+ ]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "<a id=\"LoadingData\"></a>\n",
 "## Loading the Data\n",
-"We use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:"
+"We use the well-known `adult` census dataset, which we fetch from the OpenML website. We start with a fairly unremarkable set of imports:"
 ]
 },
 {
@@ -80,9 +87,14 @@
 "outputs": [],
 "source": [
 "from sklearn import svm\n",
-"from sklearn.preprocessing import LabelEncoder, StandardScaler\n",
+"from sklearn.compose import ColumnTransformer\n",
+"from sklearn.datasets import fetch_openml\n",
+"from sklearn.impute import SimpleImputer\n",
 "from sklearn.linear_model import LogisticRegression\n",
-"import pandas as pd"
+"from sklearn.model_selection import train_test_split\n",
+"from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
+"from sklearn.compose import make_column_selector as selector\n",
+"from sklearn.pipeline import Pipeline"
 ]
 },
 {
@@ -98,10 +110,13 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from sklearn.datasets import fetch_openml\n",
-"data = fetch_openml(data_id=1590, as_frame=True)\n",
+"from fairness_nb_utils import fetch_openml_with_retries\n",
+"\n",
+"data = fetch_openml_with_retries(data_id=1590)\n",
+" \n",
+"# Extract the items we want\n",
 "X_raw = data.data\n",
-"Y = (data.target == '>50K') * 1"
+"y = (data.target == '>50K') * 1"
 ]
 },
 {
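The last line of the cell above turns the string target column into 0/1 labels: the elementwise comparison yields booleans, and multiplying by 1 casts them to integers. A minimal sketch of that idiom using a plain Python list in place of the pandas Series (toy values, purely illustrative):

```python
# Stand-in for the notebook's `data.target` column (hypothetical toy values)
target = ['<=50K', '>50K', '>50K', '<=50K']

# (t == '>50K') is a bool; multiplying by 1 casts it to 0 or 1,
# mirroring `y = (data.target == '>50K') * 1` in the notebook
y = [(t == '>50K') * 1 for t in target]
print(y)  # -> [0, 1, 1, 0]
```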
@@ -127,7 +142,7 @@
 "<a id=\"ProcessingData\"></a>\n",
 "## Processing the Data\n",
 "\n",
-"With the data loaded, we process it for our needs. First, we extract the sensitive features of interest into `A` (conventionally used in the literature) and put the rest of the feature data into `X`:"
+"With the data loaded, we process it for our needs. First, we extract the sensitive features of interest into `A` (conventionally used in the literature) and leave the rest of the feature data in `X_raw`:"
 ]
 },
 {
@@ -137,15 +152,14 @@
 "outputs": [],
 "source": [
 "A = X_raw[['sex','race']]\n",
-"X = X_raw.drop(labels=['sex', 'race'],axis = 1)\n",
-"X_dummies = pd.get_dummies(X)"
+"X_raw = X_raw.drop(labels=['sex', 'race'],axis = 1)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Next, we apply a standard set of scalings:"
+"We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset."
 ]
 },
 {
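The leakage-avoidance note above boils down to: compute statistics (mean, standard deviation, …) on the training split only, then reuse those same statistics when transforming the test split. A minimal sketch with toy numbers (no sklearn, purely illustrative):

```python
# Toy training and test columns (hypothetical values)
train = [1.0, 2.0, 3.0, 4.0]
test = [2.0, 6.0]

# "Fit" the scaling on the training data only
mean = sum(train) / len(train)
var = sum((x - mean) ** 2 for x in train) / len(train)
std = var ** 0.5

def scale(xs):
    # Apply the *training* statistics, whichever split is passed in
    return [(x - mean) / std for x in xs]

train_scaled = scale(train)
test_scaled = scale(test)  # reuses train mean/std: no peeking at test data
print(test_scaled)
```

Fitting the scaler on the full dataset instead would leak information about the test distribution into the model's inputs.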
@@ -154,42 +168,76 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"sc = StandardScaler()\n",
-"X_scaled = sc.fit_transform(X_dummies)\n",
-"X_scaled = pd.DataFrame(X_scaled, columns=X_dummies.columns)\n",
-"\n",
-"le = LabelEncoder()\n",
-"Y = le.fit_transform(Y)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Finally, we can then split our data into training and test sets, and also make the labels on our test portion of `A` human-readable:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from sklearn.model_selection import train_test_split\n",
-"X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_scaled, \n",
-"                                                                     Y, \n",
-"                                                                     A,\n",
-"                                                                     test_size = 0.2,\n",
-"                                                                     random_state=0,\n",
-"                                                                     stratify=Y)\n",
+"(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(\n",
+"    X_raw, y, A, test_size=0.3, random_state=12345, stratify=y\n",
+")\n",
 "\n",
-"# Work around indexing issue\n",
+"# Ensure indices are aligned between X, y and A,\n",
+"# after all the slicing and splitting of DataFrames\n",
+"# and Series\n",
 "X_train = X_train.reset_index(drop=True)\n",
-"A_train = A_train.reset_index(drop=True)\n",
 "X_test = X_test.reset_index(drop=True)\n",
+"y_train = y_train.reset_index(drop=True)\n",
+"y_test = y_test.reset_index(drop=True)\n",
+"A_train = A_train.reset_index(drop=True)\n",
 "A_test = A_test.reset_index(drop=True)"
 ]
 },
+{
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+  "We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).\n",
+  "\n",
+  "For this preprocessing, we make use of `Pipeline` objects from `sklearn`:"
+ ]
+},
+{
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+  "numeric_transformer = Pipeline(\n",
+  "    steps=[\n",
+  "        (\"impute\", SimpleImputer()),\n",
+  "        (\"scaler\", StandardScaler()),\n",
+  "    ]\n",
+  ")\n",
+  "\n",
+  "categorical_transformer = Pipeline(\n",
+  "    [\n",
+  "        (\"impute\", SimpleImputer(strategy=\"most_frequent\")),\n",
+  "        (\"ohe\", OneHotEncoder(handle_unknown=\"ignore\", sparse=False)),\n",
+  "    ]\n",
+  ")\n",
+  "\n",
+  "preprocessor = ColumnTransformer(\n",
+  "    transformers=[\n",
+  "        (\"num\", numeric_transformer, selector(dtype_exclude=\"category\")),\n",
+  "        (\"cat\", categorical_transformer, selector(dtype_include=\"category\")),\n",
+  "    ]\n",
+  ")"
+ ]
+},
+{
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+  "Now that the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:"
+ ]
+},
+{
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+  "X_train = preprocessor.fit_transform(X_train)\n",
+  "X_test = preprocessor.transform(X_test)"
+ ]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
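The `OneHotEncoder(handle_unknown="ignore")` step added above maps each category seen during fitting to its own column, and encodes categories never seen in training as an all-zero row instead of raising an error. A minimal pure-Python sketch of that behaviour (toy values, not the sklearn implementation):

```python
def fit_categories(values):
    """Learn the category vocabulary from the training column."""
    return sorted(set(values))

def one_hot(values, categories):
    """One column per known category; unknown values encode as all zeros."""
    return [[1.0 if v == c else 0.0 for c in categories] for v in values]

cats = fit_categories(['blue', 'red', 'blue'])  # fit on the training column
print(cats)                                     # -> ['blue', 'red']

# 'green' was never seen in training, so it becomes an all-zero row,
# mirroring handle_unknown="ignore"
print(one_hot(['red', 'green'], cats))  # -> [[0.0, 1.0], [0.0, 0.0]]
```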
@@ -208,7 +256,7 @@
 "source": [
 "lr_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)\n",
 "\n",
-"lr_predictor.fit(X_train, Y_train)"
+"lr_predictor.fit(X_train, y_train)"
 ]
 },
 {
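Once fitted, the logistic regression above predicts by passing a weighted sum of the features through the sigmoid and thresholding at 0.5. A minimal sketch of that decision rule with hypothetical, hand-picked weights (not coefficients from the actual model):

```python
import math

def predict(w, b, x):
    """Logistic-regression decision rule: sigmoid(w.x + b) thresholded at 0.5."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of the positive class
    return int(p >= 0.5)

# z = 1.5*2.0 - 2.0*0.5 + 0.1 = 2.1 > 0, so sigmoid(z) > 0.5
print(predict([1.5, -2.0], 0.1, [2.0, 0.5]))  # -> 1
# z = -2.0*1.0 + 0.1 = -1.9 < 0, so sigmoid(z) < 0.5
print(predict([1.5, -2.0], 0.1, [0.0, 1.0]))  # -> 0
```

Thresholding at 0.5 is equivalent to checking the sign of `z`, which is why logistic regression is a linear classifier.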
@@ -226,7 +274,7 @@
 "source": [
 "svm_predictor = svm.SVC()\n",
 "\n",
-"svm_predictor.fit(X_train, Y_train)"
+"svm_predictor.fit(X_train, y_train)"
 ]
 },
 {
@@ -345,7 +393,7 @@
 "\n",
 "FairlearnDashboard(sensitive_features=A_test, \n",
 "                   sensitive_feature_names=['Sex', 'Race'],\n",
-"                   y_true=Y_test.tolist(),\n",
+"                   y_true=y_test.tolist(),\n",
 "                   y_pred=ys_pred)"
 ]
 },
@@ -375,7 +423,7 @@
 "\n",
 "from fairlearn.metrics._group_metric_set import _create_group_metric_set\n",
 "\n",
-"dash_dict = _create_group_metric_set(y_true=Y_test,\n",
+"dash_dict = _create_group_metric_set(y_true=y_test,\n",
 "                                     predictions=ys_pred,\n",
 "                                     sensitive_features=sf,\n",
 "                                     prediction_type='binary_classification')"
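The group metric set built above is, at its core, a dictionary of metrics computed separately for each value of the sensitive feature. A minimal sketch of the central computation, per-group accuracy, with toy labels (purely illustrative, not the fairlearn implementation):

```python
# Toy predictions grouped by a sensitive feature value (hypothetical data)
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
groups = ['F', 'F', 'F', 'M', 'M', 'M']

# Tally (correct, total) per group
by_group = {}
for yt, yp, g in zip(y_true, y_pred, groups):
    correct, total = by_group.get(g, (0, 0))
    by_group[g] = (correct + (yt == yp), total + 1)

# Per-group accuracy: the kind of per-slice metric the dashboard displays
accuracy = {g: c / n for g, (c, n) in by_group.items()}
print(accuracy)  # both groups happen to score 2/3 here
```

Comparing such per-group values (and their gaps) is what the disparity views in the dashboard are built from.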
@@ -106,52 +106,87 @@ jupyter notebook
 <a name="samples"></a>
 # Automated ML SDK Sample Notebooks
 
-- [auto-ml-classification-credit-card-fraud.ipynb](classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb)
-  - Dataset: Kaggle's [credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
-  - Simple example of using automated ML for classification to fraudulent credit card transactions
-  - Uses azure compute for training
-
-- [auto-ml-regression.ipynb](regression/auto-ml-regression.ipynb)
-  - Dataset: Hardware Performance Dataset
-  - Simple example of using automated ML for regression
-  - Uses azure compute for training
-
-- [auto-ml-regression-explanation-featurization.ipynb](regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb)
-  - Dataset: Hardware Performance Dataset
-  - Shows featurization and excplanation
-  - Uses azure compute for training
-
-- [auto-ml-forecasting-energy-demand.ipynb](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
-  - Dataset: [NYC energy demand data](forecasting-a/nyc_energy.csv)
-  - Example of using automated ML for training a forecasting model
-
-- [auto-ml-classification-credit-card-fraud-local.ipynb](local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb)
-  - Dataset: Kaggle's [credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
-  - Simple example of using automated ML for classification to fraudulent credit card transactions
-  - Uses local compute for training
-
-- [auto-ml-classification-bank-marketing-all-features.ipynb](classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb)
-  - Dataset: UCI's [bank marketing dataset](https://www.kaggle.com/janiobachmann/bank-marketing-dataset)
-  - Simple example of using automated ML for classification to predict term deposit subscriptions for a bank
-  - Uses azure compute for training
-
-- [auto-ml-forecasting-orange-juice-sales.ipynb](forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb)
-  - Dataset: [Dominick's grocery sales of orange juice](forecasting-b/dominicks_OJ.csv)
-  - Example of training an automated ML forecasting model on multiple time-series
-
-- [auto-ml-forecasting-bike-share.ipynb](forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
-  - Dataset: forecasting for a bike-sharing
-  - Example of training an automated ML forecasting model on multiple time-series
-
-- [auto-ml-forecasting-function.ipynb](forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
-  - Example of training an automated ML forecasting model on multiple time-series
-
-- [auto-ml-forecasting-beer-remote.ipynb](forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb)
-  - Example of training an automated ML forecasting model on multiple time-series
-  - Beer Production Forecasting
-
-- [auto-ml-continuous-retraining.ipynb](continuous-retraining/auto-ml-continuous-retraining.ipynb)
-  - Continuous retraining using Pipelines and Time-Series TabularDataset
+## Classification
+- **Classify Credit Card Fraud**
+  - Dataset: [Kaggle's credit card fraud detection dataset](https://www.kaggle.com/mlg-ulb/creditcardfraud)
+  - **[Jupyter Notebook (remote run)](classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb)**
+    - run the experiment remotely on AML Compute cluster
+    - test the performance of the best model in the local environment
+  - **[Jupyter Notebook (local run)](local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb)**
+    - run experiment in the local environment
+    - use Mimic Explainer for computing feature importance
+    - deploy the best model along with the explainer to an Azure Kubernetes (AKS) cluster, which will compute the raw and engineered feature importances at inference time
+- **Predict Term Deposit Subscriptions in a Bank**
+  - Dataset: [UCI's bank marketing dataset](https://www.kaggle.com/janiobachmann/bank-marketing-dataset)
+  - **[Jupyter Notebook](classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb)**
+    - run experiment remotely on AML Compute cluster to generate ONNX compatible models
+    - view the featurization steps that were applied during training
+    - view feature importance for the best model
+    - download the best model in ONNX format and use it for inferencing using ONNXRuntime
+    - deploy the best model in PKL format to Azure Container Instance (ACI)
+- **Predict Newsgroup based on Text from News Article**
+  - Dataset: [20 newsgroups text dataset](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html)
+  - **[Jupyter Notebook](classification-text-dnn/auto-ml-classification-text-dnn.ipynb)**
+    - AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data
+    - AutoML will use Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used
+    - Bidirectional Long-Short Term neural network (BiLSTM) will be utilized when a CPU compute is used, thereby optimizing the choice of DNN
+
+## Regression
+- **Predict Performance of Hardware Parts**
+  - Dataset: Hardware Performance Dataset
+  - **[Jupyter Notebook](regression/auto-ml-regression.ipynb)**
+    - run the experiment remotely on AML Compute cluster
+    - get best trained model for a different metric than the one the experiment was optimized for
+    - test the performance of the best model in the local environment
+  - **[Jupyter Notebook (advanced)](regression/auto-ml-regression.ipynb)**
+    - run the experiment remotely on AML Compute cluster
+    - customize featurization: override column purpose within the dataset, configure transformer parameters
+    - get best trained model for a different metric than the one the experiment was optimized for
+    - run a model explanation experiment on the remote cluster
+    - deploy the model along with the explainer and run online inferencing
+
+## Time Series Forecasting
+- **Forecast Energy Demand**
+  - Dataset: [NYC energy demand data](http://mis.nyiso.com/public/P-58Blist.htm)
+  - **[Jupyter Notebook](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)**
+    - run experiment remotely on AML Compute cluster
+    - use lags and rolling window features
+    - view the featurization steps that were applied during training
+    - get the best model, use it to forecast on test data and compare the accuracy of predictions against real data
+- **Forecast Orange Juice Sales (Multi-Series)**
+  - Dataset: [Dominick's grocery sales of orange juice](forecasting-orange-juice-sales/dominicks_OJ.csv)
+  - **[Jupyter Notebook](forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb)**
+    - run experiment remotely on AML Compute cluster
+    - customize time-series featurization, change column purpose and override transformer hyperparameters
+    - evaluate locally the performance of the generated best model
+    - deploy the best model as a webservice on Azure Container Instance (ACI)
+    - get online predictions from the deployed model
+- **Forecast Demand of a Bike-Sharing Service**
+  - Dataset: [Bike demand data](forecasting-bike-share/bike-no.csv)
+  - **[Jupyter Notebook](forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)**
+    - run experiment remotely on AML Compute cluster
+    - integrate holiday features
+    - run rolling forecast for test set that is longer than the forecast horizon
+    - compute metrics on the predictions from the remote forecast
+- **The Forecast Function Interface**
+  - Dataset: Generated for sample purposes
+  - **[Jupyter Notebook](forecasting-forecast-function/auto-ml-forecasting-function.ipynb)**
+    - train a forecaster using a remote AML Compute cluster
+    - capabilities of forecast function (e.g. forecast farther into the horizon)
+    - generate confidence intervals
+- **Forecast Beverage Production**
+  - Dataset: [Monthly beer production data](forecasting-beer-remote/Beer_no_valid_split_train.csv)
+  - **[Jupyter Notebook](forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb)**
+    - train using a remote AML Compute cluster
+    - enable the DNN learning model
+    - forecast on a remote compute cluster and compare different model performance
+- **Continuous Retraining with NOAA Weather Data**
+  - Dataset: [NOAA weather data from Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/)
+  - **[Jupyter Notebook](continuous-retraining/auto-ml-continuous-retraining.ipynb)**
+    - continuously retrain a model using Pipelines and AutoML
+    - create a Pipeline to upload a time series dataset to an Azure blob
+    - create a Pipeline to run an AutoML experiment and register the best resulting model in the Workspace
+    - publish the training pipeline created and schedule it to run daily
 
 <a name="documentation"></a>
 See [Configure automated machine learning experiments](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-auto-train) to learn more about the settings and features available for automated machine learning experiments.
@@ -2,9 +2,10 @@ name: azure_automl
 dependencies:
   # The python interpreter version.
   # Currently Azure ML only supports 3.5.2 and later.
-- pip<=19.3.1
-- python>=3.5.2,<3.6.8
+- pip==20.2.4
+- python>=3.5.2,<3.8
 - nb_conda
+- boto3==1.15.18
 - matplotlib==2.1.0
 - numpy==1.18.5
 - cython
@@ -20,9 +21,8 @@ dependencies:
 
 - pip:
   # Required packages for AzureML execution, history, and data preparation.
-  - azureml-widgets
+  - azureml-widgets~=1.20.0
   - pytorch-transformers==1.0.0
   - spacy==2.1.8
   - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_win32_requirements.txt [--no-deps]
+  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.20.0/validated_win32_requirements.txt [--no-deps]
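The new pin `azureml-widgets~=1.20.0` uses pip's compatible-release operator: it accepts any `1.20.x` patch release but not `1.21.0`. A small sketch of that rule for three-part versions (simplified; the hypothetical helper below is for illustration, real resolution follows PEP 440):

```python
def compatible_release_ok(version, spec="1.20.0"):
    """Sketch of pip's ~= operator: ~=1.20.0 means >=1.20.0 and <1.21.0."""
    major, minor, micro = (int(p) for p in version.split("."))
    s_major, s_minor, s_micro = (int(p) for p in spec.split("."))
    # At least the specified version, and same major.minor series
    return ((major, minor, micro) >= (s_major, s_minor, s_micro)
            and (major, minor) < (s_major, s_minor + 1))

print(compatible_release_ok("1.20.3"))  # -> True  (patch releases allowed)
print(compatible_release_ok("1.21.0"))  # -> False (next minor excluded)
```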
@@ -2,9 +2,10 @@ name: azure_automl
 dependencies:
   # The python interpreter version.
   # Currently Azure ML only supports 3.5.2 and later.
-- pip<=19.3.1
-- python>=3.5.2,<3.6.8
+- pip==20.2.4
+- python>=3.5.2,<3.8
 - nb_conda
+- boto3==1.15.18
 - matplotlib==2.1.0
 - numpy==1.18.5
 - cython
@@ -20,9 +21,9 @@ dependencies:
 
 - pip:
   # Required packages for AzureML execution, history, and data preparation.
-  - azureml-widgets
+  - azureml-widgets~=1.20.0
   - pytorch-transformers==1.0.0
   - spacy==2.1.8
   - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_linux_requirements.txt [--no-deps]
+  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.20.0/validated_linux_requirements.txt [--no-deps]
@@ -2,10 +2,11 @@ name: azure_automl
 dependencies:
   # The python interpreter version.
   # Currently Azure ML only supports 3.5.2 and later.
-- pip<=19.3.1
+- pip==20.2.4
 - nomkl
-- python>=3.5.2,<3.6.8
+- python>=3.5.2,<3.8
 - nb_conda
+- boto3==1.15.18
 - matplotlib==2.1.0
 - numpy==1.18.5
 - cython
@@ -21,8 +22,8 @@ dependencies:
 
 - pip:
   # Required packages for AzureML execution, history, and data preparation.
-  - azureml-widgets
+  - azureml-widgets~=1.20.0
   - pytorch-transformers==1.0.0
   - spacy==2.1.8
   - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_darwin_requirements.txt [--no-deps]
+  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.20.0/validated_darwin_requirements.txt [--no-deps]
@@ -105,7 +105,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -167,7 +167,7 @@
 "You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
 "#### Creation of AmlCompute takes approximately 5 minutes. \n",
 "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
-"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota."
+"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
 ]
 },
 {
@@ -899,7 +899,7 @@
 "metadata": {
 "authors": [
 {
-"name": "anumamah"
+"name": "ratanase"
 }
 ],
 "category": "tutorial",
@@ -93,7 +93,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -450,7 +450,7 @@
 "metadata": {
 "authors": [
 {
-"name": "tzvikei"
+"name": "ratanase"
 }
 ],
 "category": "tutorial",
@@ -0,0 +1,589 @@
|
|||||||
|
{
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||||
|
"\n",
|
||||||
|
"Licensed under the MIT License."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
""
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"# Automated Machine Learning\n",
|
||||||
|
"_**Text Classification Using Deep Learning**_\n",
|
||||||
|
"\n",
|
||||||
|
"## Contents\n",
|
||||||
|
"1. [Introduction](#Introduction)\n",
|
||||||
|
"1. [Setup](#Setup)\n",
|
||||||
|
"1. [Data](#Data)\n",
|
||||||
|
"1. [Train](#Train)\n",
|
||||||
|
"1. [Evaluate](#Evaluate)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Introduction\n",
|
||||||
|
"This notebook demonstrates classification with text data using deep learning in AutoML.\n",
|
||||||
|
"\n",
|
||||||
|
"AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data. Depending on the compute cluster the user provides, AutoML tried out Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used, and Bidirectional Long-Short Term neural network (BiLSTM) when a CPU compute is used, thereby optimizing the choice of DNN for the uesr's setup.\n",
|
||||||
|
"\n",
|
||||||
|
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
|
||||||
|
"\n",
|
||||||
|
"Notebook synopsis:\n",
|
||||||
|
"\n",
|
||||||
|
"1. Creating an Experiment in an existing Workspace\n",
|
||||||
|
"2. Configuration and remote run of AutoML for a text dataset (20 Newsgroups dataset from scikit-learn) for classification\n",
|
||||||
|
"3. Registering the best model for future use\n",
|
||||||
|
"4. Evaluating the final model on a test set"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Setup"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"import logging\n",
"import os\n",
"import shutil\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.run import Run\n",
"from azureml.widgets import RunDetails\n",
"from azureml.core.model import Model\n",
"from helper import run_inference, get_result_df\n",
"from azureml.train.automl import AutoMLConfig\n",
"from sklearn.datasets import fetch_20newsgroups"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
|
||||||
|
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"ws = Workspace.from_config()\n",
"\n",
"# Choose an experiment name.\n",
"experiment_name = 'automl-classification-text-dnn'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', None)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Set up a compute cluster\n",
"This section uses a user-provided compute cluster (named \"dnntext-cluster\" in this example). If a cluster with this name does not exist in your workspace, the code below will create a new one. You can choose the parameters of the cluster as noted in the comments.\n",
"\n",
"Whether you provide/select a CPU or GPU cluster, AutoML will choose the appropriate DNN for that setup - the BiLSTM or BERT text featurizer will be included in the candidate featurizers on CPU and GPU respectively. If your goal is to obtain the most accurate model, we recommend you use a GPU cluster, since BERT featurizers usually outperform BiLSTM featurizers."
|
||||||
|
]
|
||||||
|
},
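As a sketch of the CPU/GPU trade-off described above, a small helper can map the desired featurizer to a VM SKU. This helper is hypothetical (not part of the SDK); the two SKU names are the ones this notebook already uses, and availability in your region is an assumption:

```python
# Hypothetical helper: pick a VM SKU based on which text featurizer you want.
# "STANDARD_NC6" (GPU) enables BERT; "STANDARD_D2_V2" (CPU) enables BiLSTM.
def pick_vm_size(use_bert: bool) -> str:
    return "STANDARD_NC6" if use_bert else "STANDARD_D2_V2"

print(pick_vm_size(True))  # STANDARD_NC6
```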
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"num_nodes = 2\n",
"\n",
"# Choose a name for your cluster.\n",
"amlcompute_cluster_name = \"dnntext-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
"    compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
"    print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
"    # To use BERT (recommended for best performance), select a GPU VM size such as\n",
"    # \"STANDARD_NC6\"; for BiLSTM on CPU, use a size such as \"STANDARD_D2_V2\".\n",
"    compute_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_NC6\",\n",
"                                                           max_nodes = num_nodes)\n",
"    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Get data\n",
"For this notebook we will use the 20 Newsgroups dataset from scikit-learn. We filter the data to four classes and take a small sample as training data. Note that more data is generally needed to improve accuracy; this notebook uses a small sample so that you can adapt the same template to your larger dataset."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"data_dir = \"text-dnn-data\" # Local directory to store data\n",
"blobstore_datadir = data_dir # Blob store directory to store data in\n",
"target_column_name = 'y'\n",
"feature_column_name = 'X'\n",
"\n",
"def get_20newsgroups_data():\n",
"    '''Fetches 20 Newsgroups data from scikit-learn\n",
"    Returns them in form of pandas dataframes\n",
"    '''\n",
"    remove = ('headers', 'footers', 'quotes')\n",
"    categories = [\n",
"        'rec.sport.baseball',\n",
"        'rec.sport.hockey',\n",
"        'comp.graphics',\n",
"        'sci.space',\n",
"    ]\n",
"\n",
"    data = fetch_20newsgroups(subset = 'train', categories = categories,\n",
"                              shuffle = True, random_state = 42,\n",
"                              remove = remove)\n",
"    data = pd.DataFrame({feature_column_name: data.data, target_column_name: data.target})\n",
"\n",
"    data_train = data[:200]\n",
"    data_test = data[200:300]\n",
"\n",
"    data_train = remove_blanks_20news(data_train, feature_column_name, target_column_name)\n",
"    data_test = remove_blanks_20news(data_test, feature_column_name, target_column_name)\n",
"\n",
"    return data_train, data_test\n",
"\n",
"def remove_blanks_20news(data, feature_column_name, target_column_name):\n",
"\n",
"    data[feature_column_name] = data[feature_column_name].replace(r'\\n', ' ', regex=True).apply(lambda x: x.strip())\n",
"    data = data[data[feature_column_name] != '']\n",
"\n",
"    return data"
|
||||||
|
]
|
||||||
|
},
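To see in isolation what the `remove_blanks_20news` cleanup does, here is a self-contained sketch of the same two steps on a toy DataFrame (toy data, not the 20 Newsgroups sample):

```python
import pandas as pd

df = pd.DataFrame({'X': ['hello\nworld', '   ', 'ok'], 'y': [0, 1, 2]})
# Replace newlines with spaces, strip surrounding whitespace,
# then drop rows whose text became empty.
df['X'] = df['X'].replace(r'\n', ' ', regex=True).apply(lambda x: x.strip())
df = df[df['X'] != '']
print(df['X'].tolist())  # ['hello world', 'ok']
```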
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Fetch data and upload to datastore for use in training"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"data_train, data_test = get_20newsgroups_data()\n",
|
||||||
|
"\n",
|
||||||
|
"if not os.path.isdir(data_dir):\n",
|
||||||
|
" os.mkdir(data_dir)\n",
|
||||||
|
" \n",
|
||||||
|
"train_data_fname = data_dir + '/train_data.csv'\n",
|
||||||
|
"test_data_fname = data_dir + '/test_data.csv'\n",
|
||||||
|
"\n",
|
||||||
|
"data_train.to_csv(train_data_fname, index=False)\n",
|
||||||
|
"data_test.to_csv(test_data_fname, index=False)\n",
|
||||||
|
"\n",
|
||||||
|
"datastore = ws.get_default_datastore()\n",
|
||||||
|
"datastore.upload(src_dir=data_dir, target_path=blobstore_datadir,\n",
|
||||||
|
" overwrite=True)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"train_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/train_data.csv')])"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Prepare AutoML run"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"This notebook uses the blocked_models parameter to exclude some models that can take a longer time to train on some text datasets. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"automl_settings = {\n",
"    \"experiment_timeout_minutes\": 20,\n",
"    \"primary_metric\": 'accuracy',\n",
"    \"max_concurrent_iterations\": num_nodes,\n",
"    \"max_cores_per_iteration\": -1,\n",
"    \"enable_dnn\": True,\n",
"    \"enable_early_stopping\": True,\n",
"    \"validation_size\": 0.3,\n",
"    \"verbosity\": logging.INFO,\n",
"    \"enable_voting_ensemble\": False,\n",
"    \"enable_stack_ensemble\": False,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
"                             debug_log = 'automl_errors.log',\n",
"                             compute_target=compute_target,\n",
"                             training_data=train_dataset,\n",
"                             label_column_name=target_column_name,\n",
"                             blocked_models = ['LightGBM', 'XGBoostClassifier'],\n",
"                             **automl_settings\n",
"                             )"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Submit AutoML Run"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"automl_run = experiment.submit(automl_config, show_output=True)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"automl_run"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Displaying the run object gives you links to the visual tools in the Azure portal. Go try them!"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Retrieve the Best Model\n",
"Below we select the best model pipeline from our iterations and use it to score the test data on the same compute cluster."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"You can test the model locally to get a feel for the input/output. When the model contains BERT, this step requires pytorch and pytorch-transformers to be installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your MachineLearningNotebooks folder here:\n",
"MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl_env.yml"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"best_run, fitted_model = automl_run.get_output()"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"You can now see what text transformations are used to convert text data to features for this dataset, including deep learning transformations based on BiLSTM or Transformer (BERT is one implementation of a Transformer) models."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"text_transformations_used = []\n",
|
||||||
|
"for column_group in fitted_model.named_steps['datatransformer'].get_featurization_summary():\n",
|
||||||
|
" text_transformations_used.extend(column_group['Transformations'])\n",
|
||||||
|
"text_transformations_used"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Registering the best model\n",
|
||||||
|
"We now register the best fitted model from the AutoML Run for use in future deployments. "
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Get the run statistics, extract the best model from the AutoML run, then download and register it."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"summary_df = get_result_df(automl_run)\n",
|
||||||
|
"best_dnn_run_id = summary_df['run_id'].iloc[0]\n",
|
||||||
|
"best_dnn_run = Run(experiment, best_dnn_run_id)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"model_dir = 'Model' # Local folder where the model will be stored temporarily\n",
|
||||||
|
"if not os.path.isdir(model_dir):\n",
|
||||||
|
" os.mkdir(model_dir)\n",
|
||||||
|
" \n",
|
||||||
|
"best_dnn_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Register the model in your Azure Machine Learning Workspace. If you previously registered a model, please make sure to delete it so as to replace it with this new model."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# Register the model\n",
|
||||||
|
"model_name = 'textDNN-20News'\n",
|
||||||
|
"model = Model.register(model_path = model_dir + '/model.pkl',\n",
|
||||||
|
" model_name = model_name,\n",
|
||||||
|
" tags=None,\n",
|
||||||
|
" workspace=ws)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Evaluate on Test Data"
|
||||||
|
]
|
||||||
|
},
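Before running remote inference, it may help to see how the `accuracy` primary metric relates to the class probabilities the model produces. A minimal NumPy sketch (toy numbers, not notebook output):

```python
import numpy as np

y_true = np.array([0, 1, 2, 2])
# One row of class probabilities per sample, as predict_proba would return.
proba = np.array([[0.9, 0.05, 0.05],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6],
                  [0.3, 0.4, 0.3]])
y_pred = proba.argmax(axis=1)             # most probable class per row
accuracy = float((y_pred == y_true).mean())
print(accuracy)  # 0.75
```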
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"We now use the best fitted model from the AutoML Run to make predictions on the test set. \n",
|
||||||
|
"\n",
|
||||||
|
"Test set schema should match that of the training set."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"test_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/test_data.csv')])\n",
|
||||||
|
"\n",
|
||||||
|
"# preview the first 3 rows of the dataset\n",
|
||||||
|
"test_dataset.take(3).to_pandas_dataframe()"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"test_experiment = Experiment(ws, experiment_name + \"_test\")"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"script_folder = os.path.join(os.getcwd(), 'inference')\n",
|
||||||
|
"os.makedirs(script_folder, exist_ok=True)\n",
|
||||||
|
"shutil.copy('infer.py', script_folder)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"test_run = run_inference(test_experiment, compute_target, script_folder, best_dnn_run,\n",
|
||||||
|
" train_dataset, test_dataset, target_column_name, model_name)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Display computed metrics"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"test_run"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"RunDetails(test_run).show()"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"test_run.wait_for_completion()"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"pd.Series(test_run.get_metrics())"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"authors": [
|
||||||
|
{
|
||||||
|
"name": "anshirga"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"compute": [
|
||||||
|
"AML Compute"
|
||||||
|
],
|
||||||
|
"datasets": [
|
||||||
|
"None"
|
||||||
|
],
|
||||||
|
"deployment": [
|
||||||
|
"None"
|
||||||
|
],
|
||||||
|
"exclude_from_index": false,
|
||||||
|
"framework": [
|
||||||
|
"None"
|
||||||
|
],
|
||||||
|
"friendly_name": "DNN Text Featurization",
|
||||||
|
"index_order": 2,
|
||||||
|
"kernelspec": {
|
||||||
|
"display_name": "Python 3.6",
|
||||||
|
"language": "python",
|
||||||
|
"name": "python36"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"codemirror_mode": {
|
||||||
|
"name": "ipython",
|
||||||
|
"version": 3
|
||||||
|
},
|
||||||
|
"file_extension": ".py",
|
||||||
|
"mimetype": "text/x-python",
|
||||||
|
"name": "python",
|
||||||
|
"nbconvert_exporter": "python",
|
||||||
|
"pygments_lexer": "ipython3",
|
||||||
|
"version": "3.6.7"
|
||||||
|
},
|
||||||
|
"tags": [
|
||||||
|
"None"
|
||||||
|
],
|
||||||
|
"task": "Text featurization using DNNs for classification"
|
||||||
|
},
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 2
|
||||||
|
}
|
||||||
@@ -0,0 +1,4 @@
|
|||||||
|
name: auto-ml-classification-text-dnn
dependencies:
- pip:
  - azureml-sdk
|
||||||
@@ -0,0 +1,56 @@
|
|||||||
|
import pandas as pd
from azureml.core import Environment
from azureml.train.estimator import Estimator
from azureml.core.run import Run


def run_inference(test_experiment, compute_target, script_folder, train_run,
                  train_dataset, test_dataset, target_column_name, model_name):

    inference_env = train_run.get_environment()

    est = Estimator(source_directory=script_folder,
                    entry_script='infer.py',
                    script_params={
                        '--target_column_name': target_column_name,
                        '--model_name': model_name
                    },
                    inputs=[
                        train_dataset.as_named_input('train_data'),
                        test_dataset.as_named_input('test_data')
                    ],
                    compute_target=compute_target,
                    environment_definition=inference_env)

    run = test_experiment.submit(
        est, tags={
            'training_run_id': train_run.id,
            'run_algorithm': train_run.properties['run_algorithm'],
            'valid_score': train_run.properties['score'],
            'primary_metric': train_run.properties['primary_metric']
        })

    run.log("run_algorithm", run.tags['run_algorithm'])
    return run


def get_result_df(remote_run):

    children = list(remote_run.get_children(recursive=True))
    summary_df = pd.DataFrame(index=['run_id', 'run_algorithm',
                                     'primary_metric', 'Score'])
    goal_minimize = False
    for run in children:
        if 'run_algorithm' in run.properties and 'score' in run.properties:
            summary_df[run.id] = [run.id, run.properties['run_algorithm'],
                                  run.properties['primary_metric'],
                                  float(run.properties['score'])]
            if 'goal' in run.properties:
                goal_minimize = run.properties['goal'].split('_')[-1] == 'min'

    summary_df = summary_df.T.sort_values(
        'Score',
        ascending=goal_minimize).drop_duplicates(['run_algorithm'])
    summary_df = summary_df.set_index('run_algorithm')

    return summary_df
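The sort-then-deduplicate step in `get_result_df` can be illustrated with a toy summary table (hypothetical run ids and scores, built in the same column-per-run shape before transposing):

```python
import pandas as pd

summary = pd.DataFrame(index=['run_id', 'run_algorithm', 'primary_metric', 'Score'])
summary['run_1'] = ['run_1', 'BERT', 'accuracy', 0.91]
summary['run_2'] = ['run_2', 'BERT', 'accuracy', 0.88]
summary['run_3'] = ['run_3', 'BiLSTM', 'accuracy', 0.85]

# Sort best-first, then keep only the top-scoring run per algorithm.
best = summary.T.sort_values('Score', ascending=False).drop_duplicates(['run_algorithm'])
print(best['run_id'].tolist())  # ['run_1', 'run_3']
```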
|
||||||
@@ -0,0 +1,60 @@
|
|||||||
|
import argparse

import numpy as np

import joblib  # replaces the deprecated sklearn.externals.joblib

from azureml.automl.runtime.shared.score import scoring, constants
from azureml.core import Run
from azureml.core.model import Model


parser = argparse.ArgumentParser()
parser.add_argument(
    '--target_column_name', type=str, dest='target_column_name',
    help='Target Column Name')
parser.add_argument(
    '--model_name', type=str, dest='model_name',
    help='Name of registered model')

args = parser.parse_args()
target_column_name = args.target_column_name
model_name = args.model_name

print('args passed are: ')
print('Target column name: ', target_column_name)
print('Name of registered model: ', model_name)

model_path = Model.get_model_path(model_name)
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)

run = Run.get_context()
# get input datasets by name
test_dataset = run.input_datasets['test_data']
train_dataset = run.input_datasets['train_data']

X_test_df = test_dataset.drop_columns(columns=[target_column_name]) \
    .to_pandas_dataframe()
y_test_df = test_dataset.with_timestamp_columns(None) \
    .keep_columns(columns=[target_column_name]) \
    .to_pandas_dataframe()
y_train_df = train_dataset.with_timestamp_columns(None) \
    .keep_columns(columns=[target_column_name]) \
    .to_pandas_dataframe()

predicted = model.predict_proba(X_test_df)

# Use the AutoML scoring module
class_labels = np.unique(np.concatenate((y_train_df.values, y_test_df.values)))
train_labels = model.classes_
classification_metrics = list(constants.CLASSIFICATION_SCALAR_SET)
scores = scoring.score_classification(y_test_df.values, predicted,
                                      classification_metrics,
                                      class_labels, train_labels)

print("scores:")
print(scores)

for key, value in scores.items():
    run.log(key, value)
|
||||||
@@ -32,13 +32,6 @@
|
|||||||
"8. [Test Retraining](#Test-Retraining)"
|
"8. [Test Retraining](#Test-Retraining)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@@ -88,7 +81,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
|
"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
|
||||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -150,7 +143,7 @@
|
|||||||
"You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
|
"You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
|
||||||
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
|
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
|
||||||
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
|
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
|
||||||
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota."
|
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -550,7 +543,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"authors": [
|
"authors": [
|
||||||
{
|
{
|
||||||
"name": "anshirga"
|
"name": "vivijay"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
|
|||||||
@@ -68,6 +68,7 @@
|
|||||||
"import logging\n",
|
"import logging\n",
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
|
"import json\n",
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
" \n",
|
" \n",
|
||||||
@@ -92,7 +93,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
|
"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
|
||||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -138,7 +139,8 @@
|
|||||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Choose a name for your CPU cluster\n",
|
"# Choose a name for your CPU cluster\n",
|
||||||
"cpu_cluster_name = \"reg-cluster\"\n",
|
"# Try to ensure that the cluster name is unique across the notebooks\n",
|
||||||
|
"cpu_cluster_name = \"reg-model-proxy\"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Verify that cluster does not exist already\n",
|
"# Verify that cluster does not exist already\n",
|
||||||
"try:\n",
|
"try:\n",
|
||||||
@@ -197,6 +199,7 @@
|
|||||||
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
||||||
"|**training_data**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
"|**training_data**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
||||||
"|**label_column_name**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
|
"|**label_column_name**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
|
||||||
|
"|**scenario**|We need to set this parameter to 'Latest' to enable some experimental features. This parameter should not be set outside of this experimental notebook.|\n",
|
||||||
"\n",
|
"\n",
|
||||||
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
|
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
|
||||||
]
|
]
|
||||||
@@ -225,6 +228,7 @@
|
|||||||
" compute_target = compute_target,\n",
|
" compute_target = compute_target,\n",
|
||||||
" training_data = train_data,\n",
|
" training_data = train_data,\n",
|
||||||
" label_column_name = label,\n",
|
" label_column_name = label,\n",
|
||||||
|
" scenario='Latest',\n",
|
||||||
" **automl_settings\n",
|
" **automl_settings\n",
|
||||||
" )"
|
" )"
|
||||||
]
|
]
|
||||||
@@ -321,6 +325,24 @@
|
|||||||
"print(best_run)"
|
"print(best_run)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Show hyperparameters\n",
|
||||||
|
"Show the model pipeline used for the best run with its hyperparameters."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"run_properties = json.loads(best_run.get_details()['properties']['pipeline_script'])\n",
|
||||||
|
"print(json.dumps(run_properties, indent = 1)) "
|
||||||
|
]
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@@ -451,7 +473,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"authors": [
|
"authors": [
|
||||||
{
|
{
|
||||||
"name": "rakellam"
|
"name": "sekrupa"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"categories": [
|
"categories": [
|
||||||
|
|||||||
@@ -54,9 +54,8 @@
 "\n",
 "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
 "\n",
-"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)\n",
-"\n",
 "Notebook synopsis:\n",
+"\n",
 "1. Creating an Experiment in an existing Workspace\n",
 "2. Configuration and remote run of AutoML for a time-series model exploring Regression learners, Arima, Prophet and DNNs\n",
 "4. Evaluating the fitted model using a rolling test "
@@ -114,7 +113,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -350,9 +349,7 @@
 "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
 "|**training_data**|Input dataset, containing both features and label column.|\n",
 "|**label_column_name**|The name of the label column.|\n",
-"|**enable_dnn**|Enable Forecasting DNNs|\n",
-"\n",
-"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)."
+"|**enable_dnn**|Enable Forecasting DNNs|\n"
 ]
 },
 {
@@ -650,7 +647,7 @@
 "metadata": {
 "authors": [
 {
-"name": "omkarm"
+"name": "jialiu"
 }
 ],
 "hide_code_all_hidden": false,
@@ -3,11 +3,11 @@ from azureml.core import Environment
 from azureml.core.conda_dependencies import CondaDependencies
 from azureml.train.estimator import Estimator
 from azureml.core.run import Run
+from azureml.automl.core.shared import constants


 def split_fraction_by_grain(df, fraction, time_column_name,
                             grain_column_names=None):

     if not grain_column_names:
         df['tmp_grain_column'] = 'grain'
         grain_column_names = ['tmp_grain_column']
@@ -59,11 +59,13 @@ def get_result_df(remote_run):
                                        'primary_metric', 'Score'])
     goal_minimize = False
     for run in children:
-        if('run_algorithm' in run.properties and 'score' in run.properties):
+        if run.get_status().lower() == constants.RunState.COMPLETE_RUN \
+                and 'run_algorithm' in run.properties and 'score' in run.properties:
+            # We only count in the completed child runs.
             summary_df[run.id] = [run.id, run.properties['run_algorithm'],
                                   run.properties['primary_metric'],
                                   float(run.properties['score'])]
-            if('goal' in run.properties):
+            if ('goal' in run.properties):
                 goal_minimize = run.properties['goal'].split('_')[-1] == 'min'

     summary_df = summary_df.T.sort_values(
@@ -118,7 +120,6 @@ def run_multiple_inferences(summary_df, train_experiment, test_experiment,
                             compute_target, script_folder, test_dataset,
                             lookback_dataset, max_horizon, target_column_name,
                             time_column_name, freq):
-
     for run_name, run_summary in summary_df.iterrows():
         print(run_name)
         print(run_summary)
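The completed-run filter that `get_result_df` gains above can be exercised with stand-in objects (the `FakeRun` class and `COMPLETE_RUN` constant below are hypothetical stand-ins for the SDK's `Run` objects and `constants.RunState.COMPLETE_RUN`; this is a sketch of the filter logic, not the SDK call):

```python
COMPLETE_RUN = 'completed'  # stands in for constants.RunState.COMPLETE_RUN


class FakeRun:
    # Minimal stand-in exposing the two members the filter touches.
    def __init__(self, status, properties):
        self._status = status
        self.properties = properties

    def get_status(self):
        return self._status


def completed_scored(runs):
    # Keep only completed child runs that carry both an algorithm and a score,
    # mirroring the condition added in the diff above.
    return [r for r in runs
            if r.get_status().lower() == COMPLETE_RUN
            and 'run_algorithm' in r.properties and 'score' in r.properties]


runs = [FakeRun('Completed', {'run_algorithm': 'Arima', 'score': 0.9}),
        FakeRun('Failed', {'run_algorithm': 'Prophet', 'score': 0.5}),
        FakeRun('Completed', {})]
print(len(completed_scored(runs)))  # 1
```

Failed or still-running children, and completed children without a score, are all skipped, so the summary table only ranks runs that actually produced a metric.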
@@ -1,4 +1,5 @@
 import argparse
+import os

 import numpy as np
 import pandas as pd
@@ -10,6 +11,13 @@ from sklearn.metrics import mean_absolute_error, mean_squared_error
 from azureml.automl.runtime.shared.score import scoring, constants
 from azureml.core import Run

+try:
+    import torch
+
+    _torch_present = True
+except ImportError:
+    _torch_present = False
+

 def align_outputs(y_predicted, X_trans, X_test, y_test,
                   predicted_column_name='predicted',
@@ -48,7 +56,7 @@ def align_outputs(y_predicted, X_trans, X_test, y_test,
     # or at edges of time due to lags/rolling windows
     clean = together[together[[target_column_name,
                                predicted_column_name]].notnull().all(axis=1)]
-    return(clean)
+    return (clean)


 def do_rolling_forecast_with_lookback(fitted_model, X_test, y_test,
@@ -83,8 +91,7 @@ def do_rolling_forecast_with_lookback(fitted_model, X_test, y_test,
         if origin_time != X[time_column_name].min():
             # Set the context by including actuals up-to the origin time
             test_context_expand_wind = (X[time_column_name] < origin_time)
-            context_expand_wind = (
-                X_test_expand[time_column_name] < origin_time)
+            context_expand_wind = (X_test_expand[time_column_name] < origin_time)
             y_query_expand[context_expand_wind] = y[test_context_expand_wind]

         # Print some debug info
@@ -115,8 +122,7 @@ def do_rolling_forecast_with_lookback(fitted_model, X_test, y_test,
         # Align forecast with test set for dates within
         # the current rolling window
         trans_tindex = X_trans.index.get_level_values(time_column_name)
-        trans_roll_wind = (trans_tindex >= origin_time) & (
-            trans_tindex < horizon_time)
+        trans_roll_wind = (trans_tindex >= origin_time) & (trans_tindex < horizon_time)
         test_roll_wind = expand_wind & (X[time_column_name] >= origin_time)
         df_list.append(align_outputs(
             y_fcst[trans_roll_wind], X_trans[trans_roll_wind],
@@ -155,8 +161,7 @@ def do_rolling_forecast(fitted_model, X_test, y_test, max_horizon, freq='D'):
         if origin_time != X_test[time_column_name].min():
             # Set the context by including actuals up-to the origin time
             test_context_expand_wind = (X_test[time_column_name] < origin_time)
-            context_expand_wind = (
-                X_test_expand[time_column_name] < origin_time)
+            context_expand_wind = (X_test_expand[time_column_name] < origin_time)
             y_query_expand[context_expand_wind] = y_test[
                 test_context_expand_wind]

@@ -186,10 +191,8 @@ def do_rolling_forecast(fitted_model, X_test, y_test, max_horizon, freq='D'):
         # Align forecast with test set for dates within the
         # current rolling window
         trans_tindex = X_trans.index.get_level_values(time_column_name)
-        trans_roll_wind = (trans_tindex >= origin_time) & (
-            trans_tindex < horizon_time)
-        test_roll_wind = expand_wind & (
-            X_test[time_column_name] >= origin_time)
+        trans_roll_wind = (trans_tindex >= origin_time) & (trans_tindex < horizon_time)
+        test_roll_wind = expand_wind & (X_test[time_column_name] >= origin_time)
         df_list.append(align_outputs(y_fcst[trans_roll_wind],
                                      X_trans[trans_roll_wind],
                                      X_test[test_roll_wind],
@@ -221,6 +224,10 @@ def MAPE(actual, pred):
     return np.mean(APE(actual_safe, pred_safe))


+def map_location_cuda(storage, loc):
+    return storage.cuda()
+
+
 parser = argparse.ArgumentParser()
 parser.add_argument(
     '--max_horizon', type=int, dest='max_horizon',
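The `MAPE` helper named in the hunk header above averages absolute percentage errors over "safe" (non-zero) actuals; a minimal pure-Python sketch of that metric (the script itself computes it with numpy, and the exact zero-handling there may differ):

```python
def MAPE(actual, pred):
    # Mean absolute percentage error, skipping rows where the actual is zero
    # so the per-row division cannot blow up.
    pairs = [(a, p) for a, p in zip(actual, pred) if a != 0]
    return 100 * sum(abs((a - p) / a) for a, p in pairs) / len(pairs)


print(MAPE([100, 200, 0], [110, 180, 5]))  # ~10.0
```

The third pair is dropped entirely, so a series with occasional zero demand still yields a finite score.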
@@ -238,7 +245,6 @@ parser.add_argument(
     '--model_path', type=str, dest='model_path',
     default='model.pkl', help='Filename of model to be loaded')
-

 args = parser.parse_args()
 max_horizon = args.max_horizon
 target_column_name = args.target_column_name
@@ -246,7 +252,6 @@ time_column_name = args.time_column_name
 freq = args.freq
 model_path = args.model_path
-

 print('args passed are: ')
 print(max_horizon)
 print(target_column_name)
@@ -274,8 +279,19 @@ X_lookback_df = lookback_dataset.drop_columns(columns=[target_column_name])
 y_lookback_df = lookback_dataset.with_timestamp_columns(
     None).keep_columns(columns=[target_column_name])

-fitted_model = joblib.load(model_path)
+_, ext = os.path.splitext(model_path)
+if ext == '.pt':
+    # Load the fc-tcn torch model.
+    assert _torch_present
+    if torch.cuda.is_available():
+        map_location = map_location_cuda
+    else:
+        map_location = 'cpu'
+    with open(model_path, 'rb') as fh:
+        fitted_model = torch.load(fh, map_location=map_location)
+else:
+    # Load the sklearn pipeline.
+    fitted_model = joblib.load(model_path)

 if hasattr(fitted_model, 'get_lookback'):
     lookback = fitted_model.get_lookback()
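The new model-loading branch above keys purely off the file extension. Its dispatch can be sketched without touching a real model file (`choose_loader` is a hypothetical helper written for illustration — it returns loader names instead of calling `torch.load`/`joblib.load`):

```python
import os


def choose_loader(model_path, torch_present=True):
    # '.pt' checkpoints require torch (guarded by the import flag, as in the
    # script); everything else falls through to the sklearn/joblib path.
    _, ext = os.path.splitext(model_path)
    if ext == '.pt':
        assert torch_present, 'torch must be importable to load a .pt model'
        return 'torch'
    return 'joblib'


print(choose_loader('outputs/model.pt'))   # torch
print(choose_loader('outputs/model.pkl'))  # joblib
```

Keying off the extension lets one inference script serve both the DNN (ForecastTCN) checkpoints and the classical sklearn pipelines without knowing in advance which kind of run produced the model.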
@@ -87,7 +87,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -131,7 +131,7 @@
 "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
 "#### Creation of AmlCompute takes approximately 5 minutes. \n",
 "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
-"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota."
+"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
 ]
 },
 {
@@ -548,6 +548,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+"For more details on what metrics are included and how they are calculated, please refer to [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics). You could also calculate residuals, like described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).\n",
+"\n",
+"\n",
 "Since we did a rolling evaluation on the test set, we can analyze the predictions by their forecast horizon relative to the rolling origin. The model was initially trained at a forecast horizon of 14, so each prediction from the model is associated with a horizon value from 1 to 14. The horizon values are in a column named, \"horizon_origin,\" in the prediction set. For example, we can calculate some of the error metrics grouped by the horizon:"
 ]
 },
@@ -594,7 +597,7 @@
 "metadata": {
 "authors": [
 {
-"name": "erwright"
+"name": "jialiu"
 }
 ],
 "category": "tutorial",
@@ -1,22 +1,24 @@
 import argparse
-import azureml.train.automl
-from azureml.core import Run
+from azureml.core import Dataset, Run
 from sklearn.externals import joblib


 parser = argparse.ArgumentParser()
 parser.add_argument(
     '--target_column_name', type=str, dest='target_column_name',
     help='Target Column Name')
+parser.add_argument(
+    '--test_dataset', type=str, dest='test_dataset',
+    help='Test Dataset')

 args = parser.parse_args()
 target_column_name = args.target_column_name
+test_dataset_id = args.test_dataset

 run = Run.get_context()
-# get input dataset by name
-test_dataset = run.input_datasets['test_data']
-
-df = test_dataset.to_pandas_dataframe().reset_index(drop=True)
+ws = run.experiment.workspace
+
+# get the input dataset by id
+test_dataset = Dataset.get_by_id(ws, id=test_dataset_id)

 X_test_df = test_dataset.drop_columns(columns=[target_column_name]).to_pandas_dataframe().reset_index(drop=True)
 y_test_df = test_dataset.with_timestamp_columns(None).keep_columns(columns=[target_column_name]).to_pandas_dataframe()
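The reworked `forecasting_script.py` above now receives the test dataset as an id string on the command line instead of as a named input. The argument plumbing alone can be exercised locally (the id value below is made up; resolving it would still require `Dataset.get_by_id` against a real workspace):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--target_column_name', type=str, dest='target_column_name',
                    help='Target Column Name')
parser.add_argument('--test_dataset', type=str, dest='test_dataset',
                    help='Test Dataset')

# Simulate the command line that the submitting run would pass in.
args = parser.parse_args(['--target_column_name', 'demand',
                          '--test_dataset', 'abc-123'])
print(args.target_column_name, args.test_dataset)  # demand abc-123
```

Passing an id keeps the script decoupled from the submitting run's input bindings: any dataset registered in the workspace can be pointed at without resubmitting with a different named input.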
@@ -1,29 +1,32 @@
-from azureml.train.estimator import Estimator
+from azureml.core import ScriptRunConfig


-def run_rolling_forecast(test_experiment, compute_target, train_run, test_dataset,
-                         target_column_name, inference_folder='./forecast'):
+def run_rolling_forecast(test_experiment, compute_target, train_run,
+                         test_dataset, target_column_name,
+                         inference_folder='./forecast'):
     train_run.download_file('outputs/model.pkl',
                             inference_folder + '/model.pkl')

     inference_env = train_run.get_environment()

-    est = Estimator(source_directory=inference_folder,
-                    entry_script='forecasting_script.py',
-                    script_params={
-                        '--target_column_name': target_column_name
-                    },
-                    inputs=[test_dataset.as_named_input('test_data')],
-                    compute_target=compute_target,
-                    environment_definition=inference_env)
+    config = ScriptRunConfig(source_directory=inference_folder,
+                             script='forecasting_script.py',
+                             arguments=['--target_column_name',
+                                        target_column_name,
+                                        '--test_dataset',
+                                        test_dataset.as_named_input(test_dataset.name)],
+                             compute_target=compute_target,
+                             environment=inference_env)

-    run = test_experiment.submit(est,
-                                 tags={
-                                     'training_run_id': train_run.id,
-                                     'run_algorithm': train_run.properties['run_algorithm'],
-                                     'valid_score': train_run.properties['score'],
-                                     'primary_metric': train_run.properties['primary_metric']
-                                 })
+    run = test_experiment.submit(config,
+                                 tags={'training_run_id':
+                                       train_run.id,
+                                       'run_algorithm':
+                                       train_run.properties['run_algorithm'],
+                                       'valid_score':
+                                       train_run.properties['score'],
+                                       'primary_metric':
+                                       train_run.properties['primary_metric']})

     run.log("run_algorithm", run.tags['run_algorithm'])
     return run
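A recurring mechanical step in the Estimator-to-ScriptRunConfig migration above is turning the old `script_params` dict into the flat `arguments` list that `ScriptRunConfig` takes. That reshaping (only the data-shape change, not the SDK call itself) can be sketched as:

```python
def script_params_to_arguments(script_params):
    # Flatten {'--flag': value} pairs into the ['--flag', value, ...] list
    # shape that ScriptRunConfig's `arguments` parameter expects.
    arguments = []
    for flag, value in script_params.items():
        arguments.extend([flag, value])
    return arguments


print(script_params_to_arguments({'--target_column_name': 'demand',
                                  '--test_dataset': 'abc-123'}))
# ['--target_column_name', 'demand', '--test_dataset', 'abc-123']
```

Since Python 3.7 dicts preserve insertion order, so each flag stays adjacent to its value in the resulting command line.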
@@ -97,7 +97,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -497,7 +497,7 @@
 "metadata": {},
 "source": [
 "### Evaluate\n",
-"To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE).\n",
+"To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).\n",
 "\n",
 "It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows."
 ]
@@ -703,7 +703,7 @@
 "metadata": {
 "authors": [
 {
-"name": "erwright"
+"name": "jialiu"
 }
 ],
 "categories": [
@@ -24,7 +24,7 @@
 "metadata": {},
 "source": [
 "## Introduction\n",
-"This notebook demonstrates the full interface to the `forecast()` function. \n",
+"This notebook demonstrates the full interface of the `forecast()` function. \n",
 "\n",
 "The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. \n",
 "\n",
@@ -94,7 +94,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -809,7 +809,7 @@
 "metadata": {
 "authors": [
 {
-"name": "erwright"
+"name": "jialiu"
 }
 ],
 "category": "tutorial",
@@ -82,7 +82,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
|
"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
|
||||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -126,7 +126,7 @@
|
|||||||
"You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
|
"You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
|
||||||
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
|
"#### Creation of AmlCompute takes approximately 5 minutes. \n",
|
||||||
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
|
"If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
|
||||||
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota."
|
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -325,12 +325,11 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Customization\n",
|
"## Customization\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include,\n",
|
"The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:\n",
|
||||||
|
"\n",
|
||||||
"1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamp which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration need to be add with the columns and their desired types.\n",
|
"1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamp which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration need to be add with the columns and their desired types.\n",
|
||||||
"2. Transformer parameters update: Currently supports parameter change for Imputer only. User can customize imputation methods. The supported imputing methods for target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0 which the correct behavior should impute all the missing value with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.\n",
|
"2. Transformer parameters update: Currently supports parameter change for Imputer only. User can customize imputation methods. The supported imputing methods for target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0 which the correct behavior should impute all the missing value with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.\n",
|
||||||
"3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns contain no useful data.\n",
|
"3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns contain no useful data."
|
||||||
"\n",
|
|
||||||
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -383,7 +382,7 @@
|
|||||||
"The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.\n",
|
 "The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.\n",
 "\n",
 "We note here that AutoML can sweep over two types of time-series models:\n",
-"* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace#upgrade).\n",
+"* Models that are trained for each series such as ARIMA and Facebook's Prophet.\n",
 "* Models trained across multiple time-series using a regression approach.\n",
 "\n",
 "In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. \n",
@@ -572,7 +571,7 @@
 "source": [
 "# Evaluate\n",
 "\n",
-"To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). \n",
+"To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).\n",
 "\n",
 "We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics."
 ]
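The evaluation hunk above references the mean absolute percentage error (MAPE). As a minimal stand-alone sketch of that metric, independent of the notebook's dataframe setup (the sample values below are made up for illustration):

```python
def mape(actuals, predictions):
    """Mean absolute percentage error, in percent.

    Skips points where the actual value is zero, since the
    percentage error is undefined there.
    """
    errors = [
        abs((a - p) / a)
        for a, p in zip(actuals, predictions)
        if a != 0
    ]
    return 100.0 * sum(errors) / len(errors)

# Illustrative sales quantities and forecasts
actuals = [100.0, 250.0, 80.0]
predictions = [110.0, 240.0, 100.0]
print(round(mape(actuals, predictions), 2))
```

How zero actuals are handled varies between implementations; skipping them is one common choice, not necessarily the one AutoML uses.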
@@ -764,7 +763,7 @@
 "metadata": {
 "authors": [
 {
-"name": "erwright"
+"name": "jialiu"
 }
 ],
 "category": "tutorial",

@@ -96,7 +96,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
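Several hunks in this commit bump the pinned SDK version string from 1.17.0 to 1.20.0. The notebooks only print the pinned and installed versions side by side; a sketch of what an actual comparison could look like, using plain tuple comparison (the installed-version string here is a stand-in for `azureml.core.VERSION`):

```python
def parse_version(version):
    """Split a dotted version string like '1.20.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

notebook_version = "1.20.0"   # version the notebook was created with
installed_version = "1.17.0"  # stand-in for azureml.core.VERSION

if parse_version(installed_version) < parse_version(notebook_version):
    print("Installed SDK is older than the notebook's pinned version")
else:
    print("Installed SDK is up to date for this notebook")
```

Tuple comparison avoids the classic string-comparison pitfall where "1.9.0" sorts after "1.17.0".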
@@ -359,7 +359,7 @@
 "Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data.\n",
 "\n",
 "### Run the explanation\n",
-"#### Download engineered feature importance from artifact store\n",
+"#### Download the engineered feature importance from artifact store\n",
 "You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features."
 ]
 },
@@ -375,6 +375,25 @@
 "print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Download the raw feature importance from artifact store\n",
+"You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"raw_explanations = client.download_model_explanation(raw=True)\n",
+"print(raw_explanations.get_feature_importance_dict())\n",
+"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -474,6 +493,29 @@
 "print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Use Mimic Explainer for computing and visualizing raw feature importance\n",
+"The explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"# Compute the raw explanations\n",
+"raw_explanations = explainer.explain(['local', 'global'], get_raw=True,\n",
+" raw_feature_names=automl_explainer_setup_obj.raw_feature_names,\n",
+" eval_dataset=automl_explainer_setup_obj.X_test_transform,\n",
+" raw_eval_dataset=automl_explainer_setup_obj.X_test_raw)\n",
+"print(raw_explanations.get_feature_importance_dict())\n",
+"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -589,10 +631,13 @@
 " automl_explainer_setup_obj = automl_setup_model_explanations(automl_model,\n",
 " X_test=data, task='classification')\n",
 " # Retrieve model explanations for engineered explanations\n",
-" engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) \n",
+" engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform)\n",
+" # Retrieve model explanations for raw explanations\n",
+" raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True)\n",
 " # You can return any data type as long as it is JSON-serializable\n",
 " return {'predictions': predictions.tolist(),\n",
-" 'engineered_local_importance_values': engineered_local_importance_values}\n"
+" 'engineered_local_importance_values': engineered_local_importance_values,\n",
+" 'raw_local_importance_values': raw_local_importance_values}\n"
 ]
 },
 {
@@ -725,7 +770,9 @@
 "# Print the predicted value\n",
 "print('predictions:\\n{}\\n'.format(output['predictions']))\n",
 "# Print the engineered feature importances for the predicted value\n",
-"print('engineered_local_importance_values:\\n{}\\n'.format(output['engineered_local_importance_values']))"
+"print('engineered_local_importance_values:\\n{}\\n'.format(output['engineered_local_importance_values']))\n",
+"# Print the raw feature importances for the predicted value\n",
+"print('raw_local_importance_values:\\n{}\\n'.format(output['raw_local_importance_values']))\n"
 ]
 },
 {
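The scoring-script hunks above stress that the scoring function may return any JSON-serializable payload, now extended with raw importances alongside the engineered ones. A minimal stand-alone sketch of that contract, with made-up prediction and importance values in place of real model and explainer output:

```python
import json

def run_sketch(predictions, engineered_importances, raw_importances):
    """Assemble a scoring response; every value must be JSON-serializable."""
    payload = {
        "predictions": list(predictions),
        "engineered_local_importance_values": engineered_importances,
        "raw_local_importance_values": raw_importances,
    }
    # Round-trip through JSON to prove the payload is serializable
    return json.loads(json.dumps(payload))

output = run_sketch([1, 0], [[0.3, -0.1]], [[0.25, -0.05]])
print(output["predictions"])
```

The round-trip is the cheap way to catch non-serializable values (such as numpy arrays before `.tolist()`) before the payload ever leaves the service.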
@@ -773,7 +820,7 @@
 "metadata": {
 "authors": [
 {
-"name": "anumamah"
+"name": "ratanase"
 }
 ],
 "category": "tutorial",

@@ -42,8 +42,6 @@
 "\n",
 "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
 "\n",
-"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade) \n",
-"\n",
 "In this notebook you will learn how to:\n",
 "1. Create an `Experiment` in an existing `Workspace`.\n",
 "2. Instantiating AutoMLConfig with FeaturizationConfig for customization\n",
@@ -98,7 +96,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -223,9 +221,8 @@
 "source": [
 "## Customization\n",
 "\n",
-"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade). \n",
-"\n",
 "Supported customization includes:\n",
+"\n",
 "1. Column purpose update: Override feature type for the specified column.\n",
 "2. Transformer parameter update: Update parameters for the specified transformer. Currently supports Imputer and HashOneHotEncoder.\n",
 "3. Drop columns: Columns to drop from being featurized.\n",
@@ -447,7 +444,6 @@
 "metadata": {},
 "source": [
 "## Explanations\n",
-"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade). \n",
 "This section will walk you through the workflow to compute model explanations for an AutoML model on your remote compute.\n",
 "\n",
 "### Retrieve any AutoML Model for explanations\n",
@@ -655,7 +651,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Operationailze\n",
+"## Operationalize\n",
 "In this section we will show how you can operationalize an AutoML model and the explainer which was used to compute the explanations in the previous section.\n",
 "\n",
 "### Register the AutoML model and the scoring explainer\n",
@@ -905,7 +901,7 @@
 "metadata": {
 "authors": [
 {
-"name": "anumamah"
+"name": "anshirga"
 }
 ],
 "categories": [

@@ -4,7 +4,7 @@ import os
 import joblib
 
 from interpret.ext.glassbox import LGBMExplainableModel
-from automl.client.core.common.constants import MODEL_PATH
+from azureml.automl.core.shared.constants import MODEL_PATH
 from azureml.core.experiment import Experiment
 from azureml.core.dataset import Dataset
 from azureml.core.run import Run
@@ -66,7 +66,8 @@ engineered_explanations = explainer.explain(['local', 'global'], tag='engineered
 # Compute the raw explanations
 raw_explanations = explainer.explain(['local', 'global'], get_raw=True, tag='raw explanations',
 raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
-eval_dataset=automl_explainer_setup_obj.X_test_transform)
+eval_dataset=automl_explainer_setup_obj.X_test_transform,
+raw_eval_dataset=automl_explainer_setup_obj.X_test_raw)
 
 print("Engineered and raw explanations computed successfully")
 

@@ -92,7 +92,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -462,7 +462,7 @@
 "metadata": {
 "authors": [
 {
-"name": "rakellam"
+"name": "ratanase"
 }
 ],
 "categories": [

@@ -276,21 +276,24 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from azureml.exceptions import ComputeTargetException\n",
+"from azureml.core.compute import ComputeTarget, AksCompute\n",
+"from azureml.core.compute_target import ComputeTargetException\n",
 "\n",
-"aks_name = \"my-aks\"\n",
+"aks_name = \"my-aks-insights\"\n",
 "\n",
+"creating_compute = False\n",
 "try:\n",
 " aks_target = ComputeTarget(ws, aks_name)\n",
-" print(\"Using existing AKS cluster {}.\".format(aks_name))\n",
+" print(\"Using existing AKS compute target {}.\".format(aks_name))\n",
 "except ComputeTargetException:\n",
-" print(\"Creating a new AKS cluster {}.\".format(aks_name))\n",
+" print(\"Creating a new AKS compute target {}.\".format(aks_name))\n",
 "\n",
 " # Use the default configuration (can also provide parameters to customize).\n",
 " prov_config = AksCompute.provisioning_configuration()\n",
 " aks_target = ComputeTarget.create(workspace=ws,\n",
 " name=aks_name,\n",
-" provisioning_configuration=prov_config)"
+" provisioning_configuration=prov_config)\n",
+" creating_compute = True"
 ]
 },
 {
@@ -300,7 +303,7 @@
 "outputs": [],
 "source": [
 "%%time\n",
-"if aks_target.provisioning_state != \"Succeeded\":\n",
+"if creating_compute and aks_target.provisioning_state != \"Succeeded\":\n",
 " aks_target.wait_for_completion(show_output=True)"
 ]
 },
@@ -380,7 +383,7 @@
 " aks_service.wait_for_deployment(show_output=True)\n",
 " print(aks_service.state)\n",
 "else:\n",
-" raise ValueError(\"AKS provisioning failed. Error: \", aks_service.error)"
+" raise ValueError(\"AKS cluster provisioning failed. Error: \", aks_target.provisioning_errors)"
 ]
 },
 {
@@ -458,7 +461,9 @@
 "%%time\n",
 "aks_service.delete()\n",
 "aci_service.delete()\n",
-"model.delete()"
+"model.delete()\n",
+"if creating_compute:\n",
+" aks_target.delete()"
 ]
 }
 ],
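The three hunks above implement a get-or-create pattern: reuse the compute target if it already exists, otherwise create it and remember that fact so cleanup only deletes what the notebook itself provisioned. A stand-alone sketch of the pattern, with a plain dict standing in for the workspace's compute registry (all names and values here are illustrative):

```python
class ComputeTargetException(Exception):
    """Raised when a named compute target does not exist."""

def get_compute(registry, name):
    """Look up an existing compute target by name."""
    if name not in registry:
        raise ComputeTargetException(name)
    return registry[name]

registry = {"shared-aks": "existing-cluster"}

creating_compute = False
try:
    target = get_compute(registry, "my-aks-insights")
    print("Using existing compute target")
except ComputeTargetException:
    # Not found: create it and remember that this run owns it
    registry["my-aks-insights"] = "new-cluster"
    target = registry["my-aks-insights"]
    creating_compute = True

print(sorted(registry))
```

Cleanup then mirrors creation: only `if creating_compute:` delete the target, so a cluster shared with other users is never torn down by this notebook.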
@@ -226,7 +226,7 @@
 "# Leaf domain label generates a name using the formula\n",
 "# \"<leaf-domain-label>######.<azure-region>.cloudapp.azure.net\"\n",
 "# where \"######\" is a random series of characters\n",
-"provisioning_config.enable_ssl(leaf_domain_label = \"contoso\")\n",
+"provisioning_config.enable_ssl(leaf_domain_label = \"contoso\", overwrite_existing_domain = True)\n",
 "\n",
 "aks_name = 'my-aks-ssl-1' \n",
 "# Create the cluster\n",
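The enable_ssl hunk above documents the generated DNS name format `<leaf-domain-label>######.<azure-region>.cloudapp.azure.net`, where `######` is a random series of characters. A sketch of how such a name could be assembled; the suffix generation and character set here are illustrative assumptions, not the service's actual implementation:

```python
import random
import string

def leaf_domain_name(leaf_domain_label, azure_region, rng=random):
    """Build a DNS name of the form <label>######.<region>.cloudapp.azure.net."""
    # Six random lowercase letters/digits, mimicking the "######" placeholder
    suffix = "".join(rng.choice(string.ascii_lowercase + string.digits)
                     for _ in range(6))
    return "{}{}.{}.cloudapp.azure.net".format(leaf_domain_label, suffix,
                                               azure_region)

print(leaf_domain_name("contoso", "eastus"))
```

The random suffix is why re-running provisioning can collide with an earlier certificate request, which is what the added `overwrite_existing_domain = True` flag addresses.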
@@ -23,7 +23,7 @@
 "# Train and explain models remotely via Azure Machine Learning Compute\n",
 "\n",
 "\n",
-"_**This notebook showcases how to use the Azure Machine Learning Interpretability SDK to train and explain a regression model remotely on an Azure Machine Leanrning Compute Target (AMLCompute).**_\n",
+"_**This notebook showcases how to use the Azure Machine Learning Interpretability SDK to train and explain a regression model remotely on an Azure Machine Learning Compute Target (AMLCompute).**_\n",
 "\n",
 "\n",
 "\n",
@@ -35,10 +35,7 @@
 " 1. Initialize a Workspace\n",
 " 1. Create an Experiment\n",
 " 1. Introduction to AmlCompute\n",
-" 1. Submit an AmlCompute run in a few different ways\n",
+" 1. Submit an AmlCompute run\n",
-" 1. Option 1: Provision as a run based compute target \n",
-" 1. Option 2: Provision as a persistent compute target (Basic)\n",
-" 1. Option 3: Provision as a persistent compute target (Advanced)\n",
 "1. Additional operations to perform on AmlCompute\n",
 "1. [Download model explanations from Azure Machine Learning Run History](#Download)\n",
 "1. [Visualize explanations](#Visualize)\n",
@@ -158,7 +155,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Submit an AmlCompute run in a few different ways\n",
+"## Submit an AmlCompute run\n",
 "\n",
 "First lets check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the supported_vms () function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.\n",
 "\n",
@@ -204,7 +201,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Option 1: Provision a compute target (Basic)\n",
+"### Provision a compute target\n",
 "\n",
 "You can provision an AmlCompute resource by simply defining two parameters thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continously re-use the same target, debug it between jobs or simply share the resource with other users of your workspace.\n",
 "\n",
@@ -327,183 +324,6 @@
|
|||||||
"run.get_metrics()"
|
"run.get_metrics()"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Option 2: Provision a compute target (Advanced)\n",
|
|
||||||
"\n",
|
|
||||||
"You can also specify additional properties or change defaults while provisioning AmlCompute using a more advanced configuration. This is useful when you want a dedicated cluster of 4 nodes (for example you can set the min_nodes and max_nodes to 4), or want the compute to be within an existing VNet in your subscription.\n",
|
|
||||||
"\n",
|
|
||||||
"In addition to `vm_size` and `max_nodes`, you can specify:\n",
|
|
||||||
"* `min_nodes`: Minimum nodes (default 0 nodes) to downscale to while running a job on AmlCompute\n",
|
|
||||||
"* `vm_priority`: Choose between 'dedicated' (default) and 'lowpriority' VMs when provisioning AmlCompute. Low Priority VMs use Azure's excess capacity and are thus cheaper but risk your run being pre-empted\n",
|
|
||||||
"* `idle_seconds_before_scaledown`: Idle time (default 120 seconds) to wait after run completion before auto-scaling to min_nodes\n",
|
|
||||||
"* `vnet_resourcegroup_name`: Resource group of the **existing** VNet within which AmlCompute should be provisioned\n",
|
|
||||||
"* `vnet_name`: Name of VNet\n",
|
|
||||||
"* `subnet_name`: Name of SubNet within the VNet"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
|
|
||||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
|
||||||
"\n",
|
|
||||||
"# Choose a name for your CPU cluster\n",
|
|
||||||
"cpu_cluster_name = \"cpu-cluster\"\n",
|
|
||||||
"\n",
|
|
||||||
"# Verify that cluster does not exist already\n",
|
|
||||||
"try:\n",
|
|
||||||
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
|
|
||||||
" print('Found existing cluster, use it.')\n",
|
|
||||||
"except ComputeTargetException:\n",
|
|
||||||
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',\n",
|
|
||||||
" vm_priority='lowpriority',\n",
|
|
||||||
" min_nodes=2,\n",
|
|
||||||
" max_nodes=4,\n",
|
|
||||||
" idle_seconds_before_scaledown='300',\n",
|
|
||||||
" vnet_resourcegroup_name='<my-resource-group>',\n",
|
|
||||||
" vnet_name='<my-vnet-name>',\n",
|
|
||||||
" subnet_name='<my-subnet-name>')\n",
|
|
||||||
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
|
|
||||||
"\n",
|
|
||||||
"cpu_cluster.wait_for_completion(show_output=True)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Configure & Run"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.runconfig import RunConfiguration\n",
|
|
||||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
|
||||||
"\n",
|
|
||||||
"# Create a new RunConfig object\n",
|
|
||||||
"run_config = RunConfiguration(framework=\"python\")\n",
|
|
||||||
"\n",
|
|
||||||
"# Set compute target to AmlCompute target created in previous step\n",
|
|
||||||
"run_config.target = cpu_cluster.name\n",
|
|
||||||
"\n",
|
|
||||||
"# Enable Docker \n",
|
|
||||||
"run_config.environment.docker.enabled = True\n",
|
|
||||||
"\n",
|
|
||||||
"azureml_pip_packages = [\n",
|
|
||||||
" 'azureml-defaults', 'azureml-contrib-interpret', 'azureml-telemetry', 'azureml-interpret'\n",
|
|
||||||
"]\n",
|
|
||||||
"\n",
|
|
||||||
"\n",
|
|
||||||
"\n",
|
|
||||||
"# Note: this is to pin the scikit-learn and pandas versions to be same as notebook.\n",
|
|
||||||
"# In production scenario user would choose their dependencies\n",
|
|
||||||
"import pkg_resources\n",
|
|
||||||
"available_packages = pkg_resources.working_set\n",
|
|
||||||
"sklearn_ver = None\n",
|
|
||||||
"pandas_ver = None\n",
|
|
||||||
"for dist in available_packages:\n",
|
|
||||||
" if dist.key == 'scikit-learn':\n",
|
|
||||||
" sklearn_ver = dist.version\n",
|
|
||||||
" elif dist.key == 'pandas':\n",
|
|
||||||
" pandas_ver = dist.version\n",
|
|
||||||
"sklearn_dep = 'scikit-learn'\n",
|
|
||||||
"pandas_dep = 'pandas'\n",
|
|
||||||
"if sklearn_ver:\n",
|
|
||||||
" sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)\n",
|
|
||||||
"if pandas_ver:\n",
|
|
||||||
" pandas_dep = 'pandas=={}'.format(pandas_ver)\n",
|
|
||||||
"# Specify CondaDependencies obj\n",
|
|
||||||
"# The CondaDependencies specifies the conda and pip packages that are installed in the environment\n",
|
|
||||||
"# the submitted job is run in. Note the remote environment(s) needs to be similar to the local\n",
|
|
||||||
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
|
|
||||||
"# cause errors. Please take extra care when specifying your dependencies in a production environment.\n",
|
|
||||||
"azureml_pip_packages.extend([sklearn_dep, pandas_dep])\n",
|
|
||||||
"run_config.environment.python.conda_dependencies = CondaDependencies.create(pip_packages=azureml_pip_packages)\n",
|
|
||||||
"\n",
|
|
||||||
"from azureml.core import Run\n",
|
|
||||||
"from azureml.core import ScriptRunConfig\n",
|
|
||||||
"\n",
|
|
||||||
"src = ScriptRunConfig(source_directory=project_folder, \n",
|
|
||||||
" script='train_explain.py', \n",
|
|
||||||
" run_config=run_config) \n",
|
|
||||||
"run = experiment.submit(config=src)\n",
|
|
||||||
"run"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"%%time\n",
|
|
||||||
"# Shows output of the run on stdout.\n",
|
|
||||||
"run.wait_for_completion(show_output=True)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"run.get_metrics()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"## Additional operations to perform on AmlCompute\n",
|
|
||||||
"\n",
|
|
||||||
"You can perform more operations on AmlCompute such as updating the node counts or deleting the compute. "
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"# Get_status () gets the latest status of the AmlCompute target\n",
|
|
||||||
"cpu_cluster.get_status().serialize()\n"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"# Update () takes in the min_nodes, max_nodes and idle_seconds_before_scaledown and updates the AmlCompute target\n",
|
|
||||||
"# cpu_cluster.update(min_nodes=1)\n",
|
|
||||||
"# cpu_cluster.update(max_nodes=10)\n",
|
|
||||||
"cpu_cluster.update(idle_seconds_before_scaledown=300)\n",
|
|
||||||
"# cpu_cluster.update(min_nodes=2, max_nodes=4, idle_seconds_before_scaledown=600)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"# Delete () is used to deprovision and delete the AmlCompute target. Useful if you want to re-use the compute name \n",
|
|
||||||
"# 'cpu-cluster' in this case but use a different VM family for instance.\n",
|
|
||||||
"\n",
|
|
||||||
"# cpu_cluster.delete()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -168,7 +168,7 @@
 "def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
 "\n",
 "#upload input data to workspaceblobstore\n",
-"def_blob_store.upload_files(files=['20news.pkl'], target_path='20newsgroups')"
+"def_blob_store.upload_files(files=['20news.pkl'], target_path='20newsgroups', overwrite=True)"
 ]
 },
 {
@@ -232,7 +232,7 @@
 " max_nodes=4)\n",
 "\n",
 " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
-" compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n",
+"compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n",
 "\n",
 "print(\"Azure Machine Learning Compute attached\")\n",
 "\n",
@@ -249,7 +249,7 @@
 " max_nodes=4)\n",
 " cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
 " \n",
-" cpu_cluster.wait_for_completion(show_output=True)"
+"cpu_cluster.wait_for_completion(show_output=True)"
 ]
 },
 {
@@ -19,8 +19,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# How to Setup a Schedule for a Published Pipeline\n",
-"In this notebook, we will show you how you can run an already published pipeline on a schedule."
+"# How to Setup a Schedule for a Published Pipeline or Pipeline Endpoint\n",
+"In this notebook, we will show you how you can run an already published pipeline or a pipeline endpoint on a schedule."
 ]
 },
 {
@@ -159,6 +159,43 @@
 "print(\"Newly published pipeline id: {}\".format(published_pipeline1.id))"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {
+"nteract": {
+"transient": {
+"deleting": false
+}
+}
+},
+"source": [
+"### Create a Pipeline Endpoint\n",
+"Alternatively, you can create a schedule to run a pipeline endpoint instead of a published pipeline. You will need this to create a schedule against a pipeline endpoint in the last section of this notebook. "
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"jupyter": {
+"outputs_hidden": false,
+"source_hidden": false
+},
+"nteract": {
+"transient": {
+"deleting": false
+}
+}
+},
+"outputs": [],
+"source": [
+"from azureml.pipeline.core import PipelineEndpoint\n",
+"\n",
+"pipeline_endpoint = PipelineEndpoint.publish(workspace=ws, name=\"ScheduledPipelineEndpoint\",\n",
+" pipeline=pipeline1, description=\"Publish pipeline endpoint for schedule test\")\n",
+"pipeline_endpoint"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -196,14 +233,24 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Create a schedule for the pipeline using a recurrence\n",
+"### Create a schedule for the published pipeline using a recurrence\n",
 "This schedule will run on a specified recurrence interval."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
-"metadata": {},
+"metadata": {
+"jupyter": {
+"outputs_hidden": false,
+"source_hidden": false
+},
+"nteract": {
+"transient": {
+"deleting": false
+}
+}
+},
 "outputs": [],
 "source": [
 "from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule\n",
@@ -308,7 +355,11 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"metadata": {},
+"metadata": {
+"gather": {
+"logged": 1606157800044
+}
+},
 "outputs": [],
 "source": [
 "# Set the wait_for_provisioning flag to False if you do not want to wait \n",
@@ -410,7 +461,11 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"metadata": {},
+"metadata": {
+"gather": {
+"logged": 1606157862620
+}
+},
 "outputs": [],
 "source": [
 "# Set the wait_for_provisioning flag to False if you do not want to wait \n",
@@ -419,14 +474,151 @@
 "schedule = Schedule.get(ws, schedule_id)\n",
 "print(\"Disabled schedule {}. New status is: {}\".format(schedule.id, schedule.status))"
 ]
+},
+{
+"cell_type": "markdown",
+"metadata": {
+"nteract": {
+"transient": {
+"deleting": false
+}
+}
+},
+"source": [
+"### Create a schedule for a pipeline endpoint\n",
+"As an alternative to creating schedules for a published pipeline, you can also create schedules to run pipeline endpoints.\n",
+"Retrieve the pipeline endpoint id to create a schedule. "
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"gather": {
+"logged": 1606157888851
+},
+"jupyter": {
+"outputs_hidden": false,
+"source_hidden": false
+},
+"nteract": {
+"transient": {
+"deleting": false
+}
+}
+},
+"outputs": [],
+"source": [
+"pipeline_endpoint_by_name = PipelineEndpoint.get(workspace=ws, name=\"ScheduledPipelineEndpoint\")\n",
+"published_pipeline_endpoint_id = pipeline_endpoint_by_name.id\n",
+"\n",
+"recurrence = ScheduleRecurrence(frequency=\"Day\", interval=2, hours=[22], minutes=[30]) # Runs every other day at 10:30pm\n",
+"\n",
+"schedule = Schedule.create_for_pipeline_endpoint(workspace=ws, name=\"My_Endpoint_Schedule\",\n",
+" pipeline_endpoint_id=published_pipeline_endpoint_id,\n",
+" experiment_name='Schedule_Run',\n",
+" recurrence=recurrence, description=\"Schedule_Run\",\n",
+" wait_for_provisioning=True)\n",
+"\n",
+"# You may want to make sure that the schedule is provisioned properly\n",
+"# before making any further changes to the schedule\n",
+"\n",
+"print(\"Created schedule with id: {}\".format(schedule.id))"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {
+"nteract": {
+"transient": {
+"deleting": false
+}
+}
+},
+"source": [
+"### Get all schedules for a given pipeline endpoint\n",
+"Once you have the pipeline endpoint ID, then you can get all schedules for that pipeline endpoint."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"jupyter": {
+"outputs_hidden": false,
+"source_hidden": false
+},
+"nteract": {
+"transient": {
+"deleting": false
+}
+}
+},
+"outputs": [],
+"source": [
+"schedules_for_pipeline_endpoints = Schedule.\\\n",
+" get_schedules_for_pipeline_endpoint_id(ws,\n",
+" pipeline_endpoint_id=published_pipeline_endpoint_id)\n",
+"print('Got all schedules for pipeline endpoint:', published_pipeline_endpoint_id, 'Count:',\n",
+" len(schedules_for_pipeline_endpoints))\n",
+"\n",
+"print('done')"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {
+"nteract": {
+"transient": {
+"deleting": false
+}
+}
+},
+"source": [
+"### Disable the schedule created for running the pipeline endpoint\n",
+"Recall the best practice of disabling schedules when not in use.\n",
+"The number of schedule triggers allowed per month per region per subscription is 100,000.\n",
+"This is calculated using the projected trigger counts for all active schedules."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {
+"jupyter": {
+"outputs_hidden": false,
+"source_hidden": false
+},
+"nteract": {
+"transient": {
+"deleting": false
+}
+}
+},
+"outputs": [],
+"source": [
+"fetched_schedule = Schedule.get(ws, schedule_id)\n",
+"print(\"Using schedule with id: {}\".format(fetched_schedule.id))\n",
+"\n",
+"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
+"# for the call to provision the schedule in the backend.\n",
+"fetched_schedule.disable(wait_for_provisioning=True)\n",
+"fetched_schedule = Schedule.get(ws, schedule_id)\n",
+"print(\"Disabled schedule {}. New status is: {}\".format(fetched_schedule.id, fetched_schedule.status))"
+]
 }
 ],
 "metadata": {
 "authors": [
 {
-"name": "sanpil"
+"name": "shbijlan"
 }
 ],
+"categories": [
+"how-to-use-azureml",
+"machine-learning-pipelines",
+"intro-to-pipelines"
+],
 "category": "tutorial",
 "compute": [
 "AML Compute"
@@ -441,7 +633,7 @@
 "framework": [
 "Azure ML"
 ],
-"friendly_name": "How to Setup a Schedule for a Published Pipeline",
+"friendly_name": "How to Setup a Schedule for a Published Pipeline or Pipeline Endpoint",
 "kernelspec": {
 "display_name": "Python 3.6",
 "language": "python",
@@ -459,6 +651,9 @@
 "pygments_lexer": "ipython3",
 "version": "3.6.7"
 },
+"nteract": {
+"version": "nteract-front-end@1.0.0"
+},
 "order_index": 10,
 "star_tag": [
 "featured"
@@ -466,7 +661,7 @@
 "tags": [
 "None"
 ],
-"task": "Demonstrates the use of Schedules for Published Pipelines"
+"task": "Demonstrates the use of Schedules for Published Pipelines and Pipeline endpoints"
 },
 "nbformat": 4,
 "nbformat_minor": 2
@@ -30,7 +30,7 @@
 "## Introduction\n",
 "In this example we showcase how you can use AzureML Dataset to load data for AutoML via AML Pipeline. \n",
 "\n",
-"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you have executed the [configuration](https://aka.ms/pl-config) before running this notebook.\n",
+"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you have executed the [configuration](https://aka.ms/pl-config) before running this notebook. Please also take a look at the [Automated ML setup-using-a-local-conda-environment](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning#setup-using-a-local-conda-environment) section to set up the environment.\n",
 "\n",
 "In this notebook you will learn how to:\n",
 "1. Create an `Experiment` in an existing `Workspace`.\n",
@@ -2,7 +2,3 @@ name: aml-pipelines-with-automated-machine-learning-step
 dependencies:
 - pip:
   - azureml-sdk
-  - azureml-train-automl
-  - azureml-widgets
-  - matplotlib
-  - pandas_ml
@@ -284,7 +284,7 @@
 "# Specify CondaDependencies obj, add necessary packages\n",
 "aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(\n",
 " conda_packages=['pandas','scikit-learn'], \n",
-" pip_packages=['azureml-sdk[automl,explain]', 'pyarrow'])\n",
+" pip_packages=['azureml-sdk[automl]', 'pyarrow'])\n",
 "\n",
 "print (\"Run configuration created.\")"
 ]
@@ -460,8 +460,8 @@
 " name=\"Merge Taxi Data\",\n",
 " script_name=\"merge.py\", \n",
 " arguments=[\"--output_merge\", merged_data],\n",
-" inputs=[cleansed_green_data.parse_parquet_files(file_extension=None),\n",
-" cleansed_yellow_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[cleansed_green_data.parse_parquet_files(),\n",
+" cleansed_yellow_data.parse_parquet_files()],\n",
 " outputs=[merged_data],\n",
 " compute_target=aml_compute,\n",
 " runconfig=aml_run_config,\n",
@@ -497,7 +497,7 @@
 " name=\"Filter Taxi Data\",\n",
 " script_name=\"filter.py\", \n",
 " arguments=[\"--output_filter\", filtered_data],\n",
-" inputs=[merged_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[merged_data.parse_parquet_files()],\n",
 " outputs=[filtered_data],\n",
 " compute_target=aml_compute,\n",
 " runconfig = aml_run_config,\n",
@@ -533,7 +533,7 @@
 " name=\"Normalize Taxi Data\",\n",
 " script_name=\"normalize.py\", \n",
 " arguments=[\"--output_normalize\", normalized_data],\n",
-" inputs=[filtered_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[filtered_data.parse_parquet_files()],\n",
 " outputs=[normalized_data],\n",
 " compute_target=aml_compute,\n",
 " runconfig = aml_run_config,\n",
@@ -574,7 +574,7 @@
 " name=\"Transform Taxi Data\",\n",
 " script_name=\"transform.py\", \n",
 " arguments=[\"--output_transform\", transformed_data],\n",
-" inputs=[normalized_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[normalized_data.parse_parquet_files()],\n",
 " outputs=[transformed_data],\n",
 " compute_target=aml_compute,\n",
 " runconfig = aml_run_config,\n",
@@ -614,7 +614,7 @@
 " script_name=\"train_test_split.py\", \n",
 " arguments=[\"--output_split_train\", output_split_train,\n",
 " \"--output_split_test\", output_split_test],\n",
-" inputs=[transformed_data.parse_parquet_files(file_extension=None)],\n",
+" inputs=[transformed_data.parse_parquet_files()],\n",
 " outputs=[output_split_train, output_split_test],\n",
 " compute_target=aml_compute,\n",
 " runconfig = aml_run_config,\n",
@@ -690,7 +690,7 @@
 " \"n_cross_validations\": 5\n",
 "}\n",
 "\n",
-"training_dataset = output_split_train.parse_parquet_files(file_extension=None).keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])\n",
+"training_dataset = output_split_train.parse_parquet_files().keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])\n",
 "\n",
 "automl_config = AutoMLConfig(task = 'regression',\n",
 " debug_log = 'automated_ml_errors.log',\n",
@@ -180,7 +180,9 @@
 "metadata": {},
 "source": [
 "### Create a FileDataset\n",
-"A [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred."
+"A [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred.",
+"\n",
+"You can use dataset objects as inputs. Register the datasets to the workspace if you want to reuse them later."
 ]
 },
 {
@@ -160,7 +160,8 @@
 "metadata": {},
 "source": [
 "### Create a TabularDataset\n",
-"A [TabularDataSet](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) references single or multiple files which contain data in a tabular structure (ie like CSV files) in your datastores or public urls. TabularDatasets provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred."
+"A [TabularDataSet](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) references single or multiple files which contain data in a tabular structure (ie like CSV files) in your datastores or public urls. TabularDatasets provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred.\n",
+"You can use dataset objects as inputs. Register the datasets to the workspace if you want to reuse them later."
 ]
 },
 {
@@ -175,8 +176,7 @@
 "\n",
 "path_on_datastore = iris_data.path('iris/')\n",
 "input_iris_ds = Dataset.Tabular.from_delimited_files(path=path_on_datastore, validate=False)\n",
-"registered_iris_ds = input_iris_ds.register(ws, iris_ds_name, create_new_version=True)\n",
-"named_iris_ds = registered_iris_ds.as_named_input(iris_ds_name)"
+"named_iris_ds = input_iris_ds.as_named_input(iris_ds_name)"
 ]
 },
 {
@@ -1,185 +0,0 @@
-# Original source: https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py
-import argparse
-import os
-import sys
-import re
-
-from PIL import Image
-import torch
-from torchvision import transforms
-
-
-def load_image(filename, size=None, scale=None):
-    img = Image.open(filename)
-    if size is not None:
-        img = img.resize((size, size), Image.ANTIALIAS)
-    elif scale is not None:
-        img = img.resize((int(img.size[0] / scale), int(img.size[1] / scale)), Image.ANTIALIAS)
-    return img
-
-
-def save_image(filename, data):
-    img = data.clone().clamp(0, 255).numpy()
-    img = img.transpose(1, 2, 0).astype("uint8")
-    img = Image.fromarray(img)
-    img.save(filename)
-
-
-class TransformerNet(torch.nn.Module):
-    def __init__(self):
-        super(TransformerNet, self).__init__()
-        # Initial convolution layers
-        self.conv1 = ConvLayer(3, 32, kernel_size=9, stride=1)
-        self.in1 = torch.nn.InstanceNorm2d(32, affine=True)
-        self.conv2 = ConvLayer(32, 64, kernel_size=3, stride=2)
-        self.in2 = torch.nn.InstanceNorm2d(64, affine=True)
-        self.conv3 = ConvLayer(64, 128, kernel_size=3, stride=2)
-        self.in3 = torch.nn.InstanceNorm2d(128, affine=True)
-        # Residual layers
-        self.res1 = ResidualBlock(128)
-        self.res2 = ResidualBlock(128)
-        self.res3 = ResidualBlock(128)
-        self.res4 = ResidualBlock(128)
-        self.res5 = ResidualBlock(128)
-        # Upsampling Layers
-        self.deconv1 = UpsampleConvLayer(128, 64, kernel_size=3, stride=1, upsample=2)
-        self.in4 = torch.nn.InstanceNorm2d(64, affine=True)
-        self.deconv2 = UpsampleConvLayer(64, 32, kernel_size=3, stride=1, upsample=2)
-        self.in5 = torch.nn.InstanceNorm2d(32, affine=True)
-        self.deconv3 = ConvLayer(32, 3, kernel_size=9, stride=1)
-        # Non-linearities
-        self.relu = torch.nn.ReLU()
-
-    def forward(self, X):
-        y = self.relu(self.in1(self.conv1(X)))
-        y = self.relu(self.in2(self.conv2(y)))
-        y = self.relu(self.in3(self.conv3(y)))
-        y = self.res1(y)
-        y = self.res2(y)
-        y = self.res3(y)
-        y = self.res4(y)
-        y = self.res5(y)
-        y = self.relu(self.in4(self.deconv1(y)))
-        y = self.relu(self.in5(self.deconv2(y)))
-        y = self.deconv3(y)
-        return y
-
-
-class ConvLayer(torch.nn.Module):
-    def __init__(self, in_channels, out_channels, kernel_size, stride):
-        super(ConvLayer, self).__init__()
-        reflection_padding = kernel_size // 2
-        self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
-        self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)
-
-    def forward(self, x):
-        out = self.reflection_pad(x)
-        out = self.conv2d(out)
-        return out
-
-
-class ResidualBlock(torch.nn.Module):
-    """ResidualBlock
-    introduced in: https://arxiv.org/abs/1512.03385
-    recommended architecture: http://torch.ch/blog/2016/02/04/resnets.html
-    """
-
-    def __init__(self, channels):
-        super(ResidualBlock, self).__init__()
-        self.conv1 = ConvLayer(channels, channels, kernel_size=3, stride=1)
-        self.in1 = torch.nn.InstanceNorm2d(channels, affine=True)
-        self.conv2 = ConvLayer(channels, channels, kernel_size=3, stride=1)
-        self.in2 = torch.nn.InstanceNorm2d(channels, affine=True)
-        self.relu = torch.nn.ReLU()
-
-    def forward(self, x):
-        residual = x
-        out = self.relu(self.in1(self.conv1(x)))
-        out = self.in2(self.conv2(out))
-        out = out + residual
-        return out
-
-
-class UpsampleConvLayer(torch.nn.Module):
-    """UpsampleConvLayer
-    Upsamples the input and then does a convolution. This method gives better results
-    compared to ConvTranspose2d.
-    ref: http://distill.pub/2016/deconv-checkerboard/
-    """
-
-    def __init__(self, in_channels, out_channels, kernel_size, stride, upsample=None):
-        super(UpsampleConvLayer, self).__init__()
-        self.upsample = upsample
-        if upsample:
-            self.upsample_layer = torch.nn.Upsample(mode='nearest', scale_factor=upsample)
-        reflection_padding = kernel_size // 2
-        self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
-        self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)
-
-    def forward(self, x):
-        x_in = x
-        if self.upsample:
-            x_in = self.upsample_layer(x_in)
-        out = self.reflection_pad(x_in)
-        out = self.conv2d(out)
-        return out
-
-
-def stylize(args):
-    device = torch.device("cuda" if args.cuda else "cpu")
-    with torch.no_grad():
-        style_model = TransformerNet()
-        state_dict = torch.load(os.path.join(args.model_dir, args.style + ".pth"))
-        # remove saved deprecated running_* keys in InstanceNorm from the checkpoint
-        for k in list(state_dict.keys()):
-            if re.search(r'in\d+\.running_(mean|var)$', k):
-                del state_dict[k]
-        style_model.load_state_dict(state_dict)
-        style_model.to(device)
-
-        filenames = os.listdir(args.content_dir)
-
-        for filename in filenames:
-            print("Processing {}".format(filename))
-            full_path = os.path.join(args.content_dir, filename)
-            content_image = load_image(full_path, scale=args.content_scale)
-            content_transform = transforms.Compose([
-                transforms.ToTensor(),
-                transforms.Lambda(lambda x: x.mul(255))
-            ])
-            content_image = content_transform(content_image)
-            content_image = content_image.unsqueeze(0).to(device)
-
-            output = style_model(content_image).cpu()
-
-            output_path = os.path.join(args.output_dir, filename)
-            save_image(output_path, output[0])
-
-
-def main():
-    arg_parser = argparse.ArgumentParser(description="parser for fast-neural-style")
-
-    arg_parser.add_argument("--content-scale", type=float, default=None,
-                            help="factor for scaling down the content image")
-    arg_parser.add_argument("--model-dir", type=str, required=True,
-                            help="saved model to be used for stylizing the image.")
-    arg_parser.add_argument("--cuda", type=int, required=True,
-                            help="set it to 1 for running on GPU, 0 for CPU")
-    arg_parser.add_argument("--style", type=str,
-                            help="style name")
-
-    arg_parser.add_argument("--content-dir", type=str, required=True,
-                            help="directory holding the images")
-    arg_parser.add_argument("--output-dir", type=str, required=True,
-                            help="directory holding the output images")
-    args = arg_parser.parse_args()
-
-    if args.cuda and not torch.cuda.is_available():
-        print("ERROR: cuda is not available, try running on CPU")
-        sys.exit(1)
-    os.makedirs(args.output_dir, exist_ok=True)
-    stylize(args)
-
-
-if __name__ == "__main__":
-    main()
|
||||||
@@ -1,207 +0,0 @@
# Original source: https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py
import argparse
import os
import sys
import re

from PIL import Image
import torch
from torchvision import transforms

from mpi4py import MPI


def load_image(filename, size=None, scale=None):
    img = Image.open(filename)
    if size is not None:
        img = img.resize((size, size), Image.ANTIALIAS)
    elif scale is not None:
        img = img.resize((int(img.size[0] / scale), int(img.size[1] / scale)), Image.ANTIALIAS)
    return img


def save_image(filename, data):
    img = data.clone().clamp(0, 255).numpy()
    img = img.transpose(1, 2, 0).astype("uint8")
    img = Image.fromarray(img)
    img.save(filename)


class TransformerNet(torch.nn.Module):
    def __init__(self):
        super(TransformerNet, self).__init__()
        # Initial convolution layers
        self.conv1 = ConvLayer(3, 32, kernel_size=9, stride=1)
        self.in1 = torch.nn.InstanceNorm2d(32, affine=True)
        self.conv2 = ConvLayer(32, 64, kernel_size=3, stride=2)
        self.in2 = torch.nn.InstanceNorm2d(64, affine=True)
        self.conv3 = ConvLayer(64, 128, kernel_size=3, stride=2)
        self.in3 = torch.nn.InstanceNorm2d(128, affine=True)
        # Residual layers
        self.res1 = ResidualBlock(128)
        self.res2 = ResidualBlock(128)
        self.res3 = ResidualBlock(128)
        self.res4 = ResidualBlock(128)
        self.res5 = ResidualBlock(128)
        # Upsampling Layers
        self.deconv1 = UpsampleConvLayer(128, 64, kernel_size=3, stride=1, upsample=2)
        self.in4 = torch.nn.InstanceNorm2d(64, affine=True)
        self.deconv2 = UpsampleConvLayer(64, 32, kernel_size=3, stride=1, upsample=2)
        self.in5 = torch.nn.InstanceNorm2d(32, affine=True)
        self.deconv3 = ConvLayer(32, 3, kernel_size=9, stride=1)
        # Non-linearities
        self.relu = torch.nn.ReLU()

    def forward(self, X):
        y = self.relu(self.in1(self.conv1(X)))
        y = self.relu(self.in2(self.conv2(y)))
        y = self.relu(self.in3(self.conv3(y)))
        y = self.res1(y)
        y = self.res2(y)
        y = self.res3(y)
        y = self.res4(y)
        y = self.res5(y)
        y = self.relu(self.in4(self.deconv1(y)))
        y = self.relu(self.in5(self.deconv2(y)))
        y = self.deconv3(y)
        return y


class ConvLayer(torch.nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride):
        super(ConvLayer, self).__init__()
        reflection_padding = kernel_size // 2
        self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
        self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)

    def forward(self, x):
        out = self.reflection_pad(x)
        out = self.conv2d(out)
        return out


class ResidualBlock(torch.nn.Module):
    """ResidualBlock
    introduced in: https://arxiv.org/abs/1512.03385
    recommended architecture: http://torch.ch/blog/2016/02/04/resnets.html
    """

    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.conv1 = ConvLayer(channels, channels, kernel_size=3, stride=1)
        self.in1 = torch.nn.InstanceNorm2d(channels, affine=True)
        self.conv2 = ConvLayer(channels, channels, kernel_size=3, stride=1)
        self.in2 = torch.nn.InstanceNorm2d(channels, affine=True)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        residual = x
        out = self.relu(self.in1(self.conv1(x)))
        out = self.in2(self.conv2(out))
        out = out + residual
        return out


class UpsampleConvLayer(torch.nn.Module):
    """UpsampleConvLayer
    Upsamples the input and then does a convolution. This method gives better results
    compared to ConvTranspose2d.
    ref: http://distill.pub/2016/deconv-checkerboard/
    """

    def __init__(self, in_channels, out_channels, kernel_size, stride, upsample=None):
        super(UpsampleConvLayer, self).__init__()
        self.upsample = upsample
        if upsample:
            self.upsample_layer = torch.nn.Upsample(mode='nearest', scale_factor=upsample)
        reflection_padding = kernel_size // 2
        self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
        self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)

    def forward(self, x):
        x_in = x
        if self.upsample:
            x_in = self.upsample_layer(x_in)
        out = self.reflection_pad(x_in)
        out = self.conv2d(out)
        return out


def stylize(args, comm):

    rank = comm.Get_rank()
    size = comm.Get_size()

    device = torch.device("cuda" if args.cuda else "cpu")
    with torch.no_grad():
        style_model = TransformerNet()
        state_dict = torch.load(os.path.join(args.model_dir, args.style + ".pth"))
        # remove saved deprecated running_* keys in InstanceNorm from the checkpoint
        for k in list(state_dict.keys()):
            if re.search(r'in\d+\.running_(mean|var)$', k):
                del state_dict[k]
        style_model.load_state_dict(state_dict)
        style_model.to(device)

        filenames = os.listdir(args.content_dir)
        filenames = sorted(filenames)
        partition_size = len(filenames) // size
        partitioned_filenames = filenames[rank * partition_size: (rank + 1) * partition_size]
        print("RANK {} - is processing {} images out of the total {}".format(rank, len(partitioned_filenames),
                                                                             len(filenames)))

        output_paths = []
        for filename in partitioned_filenames:
            # print("Processing {}".format(filename))
            full_path = os.path.join(args.content_dir, filename)
            content_image = load_image(full_path, scale=args.content_scale)
            content_transform = transforms.Compose([
                transforms.ToTensor(),
                transforms.Lambda(lambda x: x.mul(255))
            ])
            content_image = content_transform(content_image)
            content_image = content_image.unsqueeze(0).to(device)

            output = style_model(content_image).cpu()

            output_path = os.path.join(args.output_dir, filename)
            save_image(output_path, output[0])

            output_paths.append(output_path)

    print("RANK {} - number of pre-aggregated output files {}".format(rank, len(output_paths)))

    output_paths_list = comm.gather(output_paths, root=0)

    if rank == 0:
        print("RANK {} - number of aggregated output files {}".format(rank, len(output_paths_list)))
        print("RANK {} - end".format(rank))


def main():
    arg_parser = argparse.ArgumentParser(description="parser for fast-neural-style")

    arg_parser.add_argument("--content-scale", type=float, default=None,
                            help="factor for scaling down the content image")
    arg_parser.add_argument("--model-dir", type=str, required=True,
                            help="saved model to be used for stylizing the image.")
    arg_parser.add_argument("--cuda", type=int, required=True,
                            help="set it to 1 for running on GPU, 0 for CPU")
    arg_parser.add_argument("--style", type=str, help="style name")
    arg_parser.add_argument("--content-dir", type=str, required=True,
                            help="directory holding the images")
    arg_parser.add_argument("--output-dir", type=str, required=True,
                            help="directory holding the output images")
    args = arg_parser.parse_args()

    comm = MPI.COMM_WORLD

    if args.cuda and not torch.cuda.is_available():
        print("ERROR: cuda is not available, try running on CPU")
        sys.exit(1)
    os.makedirs(args.output_dir, exist_ok=True)
    stylize(args, comm)


if __name__ == "__main__":
    main()
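One detail of the rank-based work split in `stylize` above is worth noting: each rank takes a contiguous slice of `len(filenames) // size` files, so when the file count is not evenly divisible by the number of ranks, the trailing remainder files are never assigned to any rank. A minimal sketch of that slicing (the `partition` helper is illustrative, not part of the script):

```python
def partition(filenames, rank, size):
    # Mirrors the slicing in stylize(): rank r takes the r-th contiguous block.
    partition_size = len(filenames) // size
    return filenames[rank * partition_size:(rank + 1) * partition_size]

# 10 files across 3 ranks: each rank gets 3 files; the 10th file is left unassigned.
chunks = [partition(list(range(10)), r, 3) for r in range(3)]
print(chunks)
```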
@@ -1,728 +0,0 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Copyright (c) Microsoft Corporation. All rights reserved.\n",
        "\n",
        "Licensed under the MIT License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        ""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Neural style transfer on video\n",
        "Using modified code from `pytorch`'s neural style [example](https://pytorch.org/tutorials/advanced/neural_style_tutorial.html), we show how to set up a pipeline for doing style transfer on video. The pipeline has the following steps:\n",
        "1. Split a video into images\n",
        "2. Run neural style on each image using one of the provided models (from `pytorch` pretrained models for this example).\n",
        "3. Stitch the images back into a video."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Prerequisites\n",
        "If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Initialize Workspace\n",
        "\n",
        "Initialize a workspace object from persisted configuration."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "from azureml.core import Workspace, Experiment\n",
        "\n",
        "ws = Workspace.from_config()\n",
        "print('Workspace name: ' + ws.name, \n",
        "      'Azure region: ' + ws.location, \n",
        "      'Subscription id: ' + ws.subscription_id, \n",
        "      'Resource group: ' + ws.resource_group, sep = '\\n')\n",
        "\n",
        "scripts_folder = \"mpi_scripts\"\n",
        "\n",
        "if not os.path.isdir(scripts_folder):\n",
        "    os.mkdir(scripts_folder)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from azureml.core.compute import AmlCompute, ComputeTarget\n",
        "from azureml.core.datastore import Datastore\n",
        "from azureml.data.data_reference import DataReference\n",
        "from azureml.pipeline.core import Pipeline, PipelineData\n",
        "from azureml.pipeline.steps import PythonScriptStep, MpiStep\n",
        "from azureml.core.runconfig import CondaDependencies, RunConfiguration\n",
        "from azureml.core.compute_target import ComputeTargetException"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Create or use existing compute"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# AmlCompute\n",
        "cpu_cluster_name = \"cpu-cluster\"\n",
        "try:\n",
        "    cpu_cluster = AmlCompute(ws, cpu_cluster_name)\n",
        "    print(\"found existing cluster.\")\n",
        "except ComputeTargetException:\n",
        "    print(\"creating new cluster\")\n",
        "    provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_v2\",\n",
        "                                                                max_nodes = 1)\n",
        "\n",
        "    # create the cluster\n",
        "    cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, provisioning_config)\n",
        "    cpu_cluster.wait_for_completion(show_output=True)\n",
        "    \n",
        "# AmlCompute\n",
        "gpu_cluster_name = \"gpu-cluster\"\n",
        "try:\n",
        "    gpu_cluster = AmlCompute(ws, gpu_cluster_name)\n",
        "    print(\"found existing cluster.\")\n",
        "except ComputeTargetException:\n",
        "    print(\"creating new cluster\")\n",
        "    provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_NC6\",\n",
        "                                                                max_nodes = 3)\n",
        "\n",
        "    # create the cluster\n",
        "    gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)\n",
        "    gpu_cluster.wait_for_completion(show_output=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Python Scripts\n",
        "We use an edited version of `neural_style_mpi.py` (original is [here](https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py)). Scripts to split and stitch the video are thin wrappers around calls to `ffmpeg`. These scripts are also located in the \"scripts_folder\".\n",
        "\n",
        "We install `ffmpeg` through conda dependencies."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "%%writefile $scripts_folder/process_video.py\n",
        "import argparse\n",
        "import glob\n",
        "import os\n",
        "import subprocess\n",
        "\n",
        "parser = argparse.ArgumentParser(description=\"Process input video\")\n",
        "parser.add_argument('--input_video', required=True)\n",
        "parser.add_argument('--output_audio', required=True)\n",
        "parser.add_argument('--output_images', required=True)\n",
        "\n",
        "args = parser.parse_args()\n",
        "\n",
        "os.makedirs(args.output_audio, exist_ok=True)\n",
        "os.makedirs(args.output_images, exist_ok=True)\n",
        "\n",
        "subprocess.run(\"ffmpeg -i {} {}/video.aac\"\n",
        "               .format(args.input_video, args.output_audio),\n",
        "               shell=True, check=True\n",
        "               )\n",
        "\n",
        "subprocess.run(\"ffmpeg -i {} {}/%05d_video.jpg -hide_banner\"\n",
        "               .format(args.input_video, args.output_images),\n",
        "               shell=True, check=True\n",
        "               )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "%%writefile $scripts_folder/stitch_video.py\n",
        "import argparse\n",
        "import os\n",
        "import subprocess\n",
        "\n",
        "parser = argparse.ArgumentParser(description=\"Process input video\")\n",
        "parser.add_argument('--images_dir', required=True)\n",
        "parser.add_argument('--input_audio', required=True)\n",
        "parser.add_argument('--output_dir', required=True)\n",
        "\n",
        "args = parser.parse_args()\n",
        "\n",
        "os.makedirs(args.output_dir, exist_ok=True)\n",
        "\n",
        "subprocess.run(\"ffmpeg -framerate 30 -i {}/%05d_video.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p \"\n",
        "               \"-y {}/video_without_audio.mp4\"\n",
        "               .format(args.images_dir, args.output_dir),\n",
        "               shell=True, check=True\n",
        "               )\n",
        "\n",
        "subprocess.run(\"ffmpeg -i {}/video_without_audio.mp4 -i {}/video.aac -map 0:0 -map 1:0 -vcodec \"\n",
        "               \"copy -acodec copy -y {}/video_with_audio.mp4\"\n",
        "               .format(args.output_dir, args.input_audio, args.output_dir),\n",
        "               shell=True, check=True\n",
        "               )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The sample video **orangutan.mp4** is stored at a publicly shared datastore, which we register below. If you want to take a look at the original video, see https://pipelinedata.blob.core.windows.net/sample-videos/orangutan.mp4."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# datastore for input video\n",
        "account_name = \"pipelinedata\"\n",
        "video_ds = Datastore.register_azure_blob_container(ws, \"videos\", \"sample-videos\",\n",
        "                                                   account_name=account_name, overwrite=True)\n",
        "\n",
        "# datastore for models\n",
        "models_ds = Datastore.register_azure_blob_container(ws, \"models\", \"styletransfer\", \n",
        "                                                    account_name=\"pipelinedata\", \n",
        "                                                    overwrite=True)\n",
        "    \n",
        "# downloaded models from https://pytorch.org/tutorials/advanced/neural_style_tutorial.html are kept here\n",
        "models_dir = DataReference(data_reference_name=\"models\", datastore=models_ds, \n",
        "                           path_on_datastore=\"saved_models\", mode=\"download\")\n",
        "\n",
        "# the default blob store attached to a workspace\n",
        "default_datastore = ws.get_default_datastore()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Sample video"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "video_name = os.getenv(\"STYLE_TRANSFER_VIDEO_NAME\", \"orangutan.mp4\") \n",
        "orangutan_video = DataReference(datastore=video_ds,\n",
        "                                data_reference_name=\"video\",\n",
        "                                path_on_datastore=video_name, mode=\"download\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "cd = CondaDependencies()\n",
        "\n",
        "cd.add_channel(\"conda-forge\")\n",
        "cd.add_conda_package(\"ffmpeg\")\n",
        "\n",
        "cd.add_channel(\"pytorch\")\n",
        "cd.add_conda_package(\"pytorch\")\n",
        "cd.add_conda_package(\"torchvision\")\n",
        "\n",
        "# Runconfig\n",
        "amlcompute_run_config = RunConfiguration(conda_dependencies=cd)\n",
        "amlcompute_run_config.environment.docker.enabled = True\n",
        "amlcompute_run_config.environment.docker.base_image = \"pytorch/pytorch\"\n",
        "amlcompute_run_config.environment.spark.precache_packages = False"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "ffmpeg_audio = PipelineData(name=\"ffmpeg_audio\", datastore=default_datastore)\n",
        "ffmpeg_images = PipelineData(name=\"ffmpeg_images\", datastore=default_datastore)\n",
        "processed_images = PipelineData(name=\"processed_images\", datastore=default_datastore)\n",
        "output_video = PipelineData(name=\"output_video\", datastore=default_datastore)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Define tweakable parameters to pipeline\n",
        "These parameters can be changed when the pipeline is published and rerun from a REST call."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from azureml.pipeline.core.graph import PipelineParameter\n",
        "# create a parameter for style (one of \"candy\", \"mosaic\", \"rain_princess\", \"udnie\") to transfer the images to\n",
        "style_param = PipelineParameter(name=\"style\", default_value=\"mosaic\")\n",
        "# create a parameter for the number of nodes to use in step no. 2 (style transfer)\n",
        "nodecount_param = PipelineParameter(name=\"nodecount\", default_value=1)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "split_video_step = PythonScriptStep(\n",
        "    name=\"split video\",\n",
        "    script_name=\"process_video.py\",\n",
        "    arguments=[\"--input_video\", orangutan_video,\n",
        "               \"--output_audio\", ffmpeg_audio,\n",
        "               \"--output_images\", ffmpeg_images,\n",
        "               ],\n",
        "    compute_target=cpu_cluster,\n",
        "    inputs=[orangutan_video],\n",
        "    outputs=[ffmpeg_images, ffmpeg_audio],\n",
        "    runconfig=amlcompute_run_config,\n",
        "    source_directory=scripts_folder\n",
        ")\n",
        "\n",
        "# create an MPI step for distributing the style transfer step across multiple nodes in AmlCompute \n",
        "# using the 'nodecount_param' PipelineParameter\n",
        "distributed_style_transfer_step = MpiStep(\n",
        "    name=\"mpi style transfer\",\n",
        "    script_name=\"neural_style_mpi.py\",\n",
        "    arguments=[\"--content-dir\", ffmpeg_images,\n",
        "               \"--output-dir\", processed_images,\n",
        "               \"--model-dir\", models_dir,\n",
        "               \"--style\", style_param,\n",
        "               \"--cuda\", 1\n",
        "               ],\n",
        "    compute_target=gpu_cluster,\n",
        "    node_count=nodecount_param, \n",
        "    process_count_per_node=1,\n",
        "    inputs=[models_dir, ffmpeg_images],\n",
        "    outputs=[processed_images],\n",
        "    pip_packages=[\"mpi4py\", \"torch\", \"torchvision\"],\n",
        "    use_gpu=True,\n",
        "    source_directory=scripts_folder\n",
        ")\n",
        "\n",
        "stitch_video_step = PythonScriptStep(\n",
        "    name=\"stitch\",\n",
        "    script_name=\"stitch_video.py\",\n",
        "    arguments=[\"--images_dir\", processed_images, \n",
        "               \"--input_audio\", ffmpeg_audio, \n",
        "               \"--output_dir\", output_video],\n",
        "    compute_target=cpu_cluster,\n",
        "    inputs=[processed_images, ffmpeg_audio],\n",
        "    outputs=[output_video],\n",
        "    runconfig=amlcompute_run_config,\n",
        "    source_directory=scripts_folder\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Run the pipeline"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "pipeline = Pipeline(workspace=ws, steps=[stitch_video_step])\n",
        "# submit the pipeline and provide values for the PipelineParameters used in the pipeline\n",
        "pipeline_run = Experiment(ws, 'style_transfer').submit(pipeline, pipeline_parameters={\"style\": \"mosaic\", \"nodecount\": 3})"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Monitor using widget"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from azureml.widgets import RunDetails\n",
        "RunDetails(pipeline_run).show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The next cells download the video into the `output_video` folder."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Download output video"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def download_video(run, target_dir=None):\n",
        "    stitch_run = run.find_step_run(\"stitch\")[0]\n",
        "    port_data = stitch_run.get_output_data(\"output_video\")\n",
        "    port_data.download(target_dir, show_progress=True)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "pipeline_run.wait_for_completion()\n",
        "download_video(pipeline_run, \"output_video_mosaic\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Publish pipeline"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "published_pipeline = pipeline_run.publish_pipeline(\n",
        "    name=\"batch score style transfer\", description=\"style transfer\", version=\"1.0\")\n",
        "\n",
        "published_pipeline"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Get published pipeline\n",
        "\n",
        "You can get the published pipeline using its **pipeline id**.\n",
        "\n",
        "To get all the published pipelines for a given workspace (ws): \n",
        "```python\n",
        "all_pub_pipelines = PublishedPipeline.get_all(ws)\n",
        "```"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from azureml.pipeline.core import PublishedPipeline\n",
        "\n",
        "pipeline_id = published_pipeline.id # use your published pipeline id\n",
        "published_pipeline = PublishedPipeline.get(ws, pipeline_id)\n",
        "\n",
        "published_pipeline"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Re-run pipeline through REST calls for other styles"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Get AAD token\n",
        "[This notebook](https://aka.ms/pl-restep-auth) shows how to authenticate to an AML workspace."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from azureml.core.authentication import InteractiveLoginAuthentication\n",
        "import requests\n",
        "\n",
        "auth = InteractiveLoginAuthentication()\n",
        "aad_token = auth.get_authentication_header()\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Get endpoint URL"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "rest_endpoint = published_pipeline.endpoint"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Send request and monitor"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Run the pipeline using PipelineParameter values style='candy' and nodecount=2"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "response = requests.post(rest_endpoint, \n",
        "                         headers=aad_token,\n",
        "                         json={\"ExperimentName\": \"style_transfer\",\n",
        "                               \"ParameterAssignments\": {\"style\": \"candy\", \"nodecount\": 2}})"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "try:\n",
        "    response.raise_for_status()\n",
        "except Exception: \n",
        "    raise Exception('Received bad response from the endpoint: {}\\n'\n",
        "                    'Response Code: {}\\n'\n",
        "                    'Headers: {}\\n'\n",
        "                    'Content: {}'.format(rest_endpoint, response.status_code, response.headers, response.content))\n",
        "\n",
        "run_id = response.json().get('Id')\n",
        "print('Submitted pipeline run: ', run_id)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from azureml.pipeline.core.run import PipelineRun\n",
        "published_pipeline_run_candy = PipelineRun(ws.experiments[\"style_transfer\"], run_id)\n",
        "RunDetails(published_pipeline_run_candy).show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Run the pipeline using PipelineParameter values style='rain_princess' and nodecount=3"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "response = requests.post(rest_endpoint, \n",
        "                         headers=aad_token,\n",
        "                         json={\"ExperimentName\": \"style_transfer\",\n",
        "                               \"ParameterAssignments\": {\"style\": \"rain_princess\", \"nodecount\": 3}})"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "try:\n",
        "    response.raise_for_status()\n",
        "except Exception: \n",
        "    raise Exception('Received bad response from the endpoint: {}\\n'\n",
        "                    'Response Code: {}\\n'\n",
        "                    'Headers: {}\\n'\n",
        "                    'Content: {}'.format(rest_endpoint, response.status_code, response.headers, response.content))\n",
        "\n",
        "run_id = response.json().get('Id')\n",
        "print('Submitted pipeline run: ', run_id)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "published_pipeline_run_rain = PipelineRun(ws.experiments[\"style_transfer\"], run_id)\n",
        "RunDetails(published_pipeline_run_rain).show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Run the pipeline using PipelineParameter values style='udnie' and nodecount=3"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "response = requests.post(rest_endpoint, \n",
        "                         headers=aad_token,\n",
        "                         json={\"ExperimentName\": \"style_transfer\",\n",
        "                               \"ParameterAssignments\": {\"style\": \"udnie\", \"nodecount\": 3}})\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "try:\n",
        "    response.raise_for_status()\n",
        "except Exception: \n",
        "    raise Exception('Received bad response from the endpoint: {}\\n'\n",
        "                    'Response Code: {}\\n'\n",
        "                    'Headers: {}\\n'\n",
        "                    'Content: {}'.format(rest_endpoint, response.status_code, response.headers, response.content))\n",
        "\n",
        "run_id = response.json().get('Id')\n",
        "print('Submitted pipeline run: ', run_id)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "published_pipeline_run_udnie = PipelineRun(ws.experiments[\"style_transfer\"], run_id)\n",
        "RunDetails(published_pipeline_run_udnie).show()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"## Download output from re-run"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"published_pipeline_run_candy.wait_for_completion()\n",
|
|
||||||
"published_pipeline_run_rain.wait_for_completion()\n",
|
|
||||||
"published_pipeline_run_udnie.wait_for_completion()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"download_video(published_pipeline_run_candy, target_dir=\"output_video_candy\")\n",
|
|
||||||
"download_video(published_pipeline_run_rain, target_dir=\"output_video_rain_princess\")\n",
|
|
||||||
"download_video(published_pipeline_run_udnie, target_dir=\"output_video_udnie\")"
|
|
||||||
]
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"metadata": {
|
|
||||||
"authors": [
|
|
||||||
{
|
|
||||||
"name": "balapv mabables"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"kernelspec": {
|
|
||||||
"display_name": "Python 3.6",
|
|
||||||
"language": "python",
|
|
||||||
"name": "python36"
|
|
||||||
},
|
|
||||||
"language_info": {
|
|
||||||
"codemirror_mode": {
|
|
||||||
"name": "ipython",
|
|
||||||
"version": 3
|
|
||||||
},
|
|
||||||
"file_extension": ".py",
|
|
||||||
"mimetype": "text/x-python",
|
|
||||||
"name": "python",
|
|
||||||
"nbconvert_exporter": "python",
|
|
||||||
"pygments_lexer": "ipython3",
|
|
||||||
"version": "3.6.7"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"nbformat": 4,
|
|
||||||
"nbformat_minor": 2
|
|
||||||
}
|
|
||||||
@@ -1,7 +0,0 @@
-name: pipeline-style-transfer-mpi
-dependencies:
-- pip:
-  - azureml-sdk
-  - azureml-pipeline-steps
-  - azureml-widgets
-  - requests
@@ -121,6 +121,33 @@
 "                   auth=interactive_auth)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Despite having access to the workspace, you may sometimes see the following error when retrieving it:\n",
+"\n",
+"```\n",
+"You are currently logged-in to xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx tenant. You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription, please check if it is in this tenant.\n",
+"```\n",
+"\n",
+"This error sometimes occurs when you are trying to access a subscription to which you were recently added. In this case, you need to force authentication again to avoid using a cached authentication token that has not picked up the new permissions. You can do so by setting `force=True` on the `InteractiveLoginAuthentication()` object's constructor as follows:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"forced_interactive_auth = InteractiveLoginAuthentication(tenant_id=\"my-tenant-id\", force=True)\n",
+"\n",
+"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
+"               resource_group=\"my-ml-rg\",\n",
+"               workspace_name=\"my-ml-workspace\",\n",
+"               auth=forced_interactive_auth)"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -408,7 +435,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from azureml.core import Experiment, Run\n",
+"from azureml.core import Experiment\n",
 "from azureml.core.script_run_config import ScriptRunConfig\n",
 "\n",
 "exp = Experiment(workspace = ws, name=\"try-secret\")\n",
@@ -424,13 +451,6 @@
 "source": [
 "Furthermore, you can set and get multiple secrets using set_secrets and get_secrets methods."
 ]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": []
 }
 ],
 "metadata": {
@@ -136,7 +136,7 @@
 "    # create the cluster\n",
 "    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-"    compute_target.wait_for_completion(show_output=True)\n",
+"compute_target.wait_for_completion(show_output=True)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"
@@ -606,14 +606,32 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** `print(service.get_logs())`"
+"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** "
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"print(service.get_logs())"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This is the scoring web service endpoint: `print(service.scoring_uri)`"
+"This is the scoring web service endpoint:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"print(service.scoring_uri)"
 ]
 },
 {
@@ -742,7 +760,7 @@
 "metadata": {
 "authors": [
 {
-"name": "swatig"
+"name": "nagaur"
 }
 ],
 "category": "training",
@@ -308,9 +308,9 @@
 "    # create the cluster\n",
 "    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-"    # can poll for a minimum number of nodes and for a specific timeout. \n",
-"    # if no min node count is provided it uses the scale settings for the cluster\n",
-"    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
+"# can poll for a minimum number of nodes and for a specific timeout. \n",
+"# if no min node count is provided it uses the scale settings for the cluster\n",
+"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"
@@ -429,7 +429,8 @@
 "dependencies:\n",
 "- python=3.6.2\n",
 "- pip:\n",
-"  - azureml-defaults==1.13.0\n",
+"  - h5py<=2.10.0\n",
+"  - azureml-defaults\n",
 "  - tensorflow-gpu==2.0.0\n",
 "  - keras<=2.3.1\n",
 "  - matplotlib"
@@ -981,6 +982,7 @@
 "\n",
 "cd = CondaDependencies.create()\n",
 "cd.add_tensorflow_conda_package()\n",
+"cd.add_conda_package('h5py<=2.10.0')\n",
 "cd.add_conda_package('keras<=2.3.1')\n",
 "cd.add_pip_package(\"azureml-defaults\")\n",
 "cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')\n",
@@ -1031,7 +1033,16 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** `print(service.get_logs())`"
+"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:**"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"print(service.get_logs())"
 ]
 },
 {
@@ -128,7 +128,7 @@
 "    # create the cluster\n",
 "    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-"    compute_target.wait_for_completion(show_output=True)\n",
+"compute_target.wait_for_completion(show_output=True)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"
@@ -714,7 +714,7 @@
 "metadata": {
 "authors": [
 {
-"name": "swatig"
+"name": "nagaur"
 }
 ],
 "category": "training",
@@ -5,5 +5,6 @@ dependencies:
 - azureml-widgets
 - pillow==5.4.1
 - matplotlib
-- https://download.pytorch.org/whl/cpu/torch-1.1.0-cp35-cp35m-win_amd64.whl
-- https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp35-cp35m-win_amd64.whl
+- numpy==1.19.3
+- https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp36-cp36m-win_amd64.whl
+- https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp36-cp36m-win_amd64.whl
@@ -153,9 +153,9 @@
 "    # create the cluster\n",
 "    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-"    # can poll for a minimum number of nodes and for a specific timeout. \n",
-"    # if no min node count is provided it uses the scale settings for the cluster\n",
-"    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
+"# can poll for a minimum number of nodes and for a specific timeout. \n",
+"# if no min node count is provided it uses the scale settings for the cluster\n",
+"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"
@@ -572,7 +572,7 @@
 "metadata": {
 "authors": [
 {
-"name": "swatig"
+"name": "nagaur"
 }
 ],
 "category": "training",
@@ -306,9 +306,9 @@
 "    # create the cluster\n",
 "    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-"    # can poll for a minimum number of nodes and for a specific timeout. \n",
-"    # if no min node count is provided it uses the scale settings for the cluster\n",
-"    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
+"# can poll for a minimum number of nodes and for a specific timeout. \n",
+"# if no min node count is provided it uses the scale settings for the cluster\n",
+"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"
@@ -852,7 +852,7 @@
 "metadata": {
 "authors": [
 {
-"name": "swatig"
+"name": "nagaur"
 }
 ],
 "category": "training",
@@ -322,9 +322,9 @@
 "    # create the cluster\n",
 "    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-"    # can poll for a minimum number of nodes and for a specific timeout. \n",
-"    # if no min node count is provided it uses the scale settings for the cluster\n",
-"    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
+"# can poll for a minimum number of nodes and for a specific timeout. \n",
+"# if no min node count is provided it uses the scale settings for the cluster\n",
+"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"
@@ -1135,7 +1135,7 @@
 "metadata": {
 "authors": [
 {
-"name": "swatig"
+"name": "nagaur"
 }
 ],
 "category": "training",
@@ -30,7 +30,6 @@ Using these samples, you will learn how to do the following.
 
 | File/folder | Description |
 |-------------------|--------------------------------------------|
-| [devenv_setup.ipynb](setup/devenv_setup.ipynb) | Notebook to setup virtual network for using Azure Machine Learning. Needed for the Pong and Minecraft examples. |
 | [cartpole_ci.ipynb](cartpole-on-compute-instance/cartpole_ci.ipynb) | Notebook to train a Cartpole playing agent on an Azure Machine Learning Compute Instance |
 | [cartpole_sc.ipynb](cartpole-on-single-compute/cartpole_sc.ipynb) | Notebook to train a Cartpole playing agent on an Azure Machine Learning Compute Cluster (single node) |
 | [pong_rllib.ipynb](atari-on-distributed-compute/pong_rllib.ipynb) | Notebook for distributed training of Pong agent using RLlib on multiple compute targets |
@@ -46,9 +45,7 @@ To make use of these samples, you need the following.
 * An Azure Machine Learning Workspace in the resource group.
 * Azure Machine Learning training compute. These samples use the VM sizes `STANDARD_NC6` and `STANDARD_D2_V2`. If these are not available in your region,
 you can replace them with other sizes.
-* A virtual network set up in the resource group for samples that use multiple compute targets. The Cartpole examples do not need a virtual network.
-* The [devenv_setup.ipynb](setup/devenv_setup.ipynb) notebook shows you how to create a virtual network. You can alternatively use an existing virtual network, make sure it's in the same region as workspace is.
-* Any network security group defined on the virtual network must allow network traffic on ports used by Azure infrastructure services. This is described in more detail in the [devenv_setup.ipynb](setup/devenv_setup.ipynb) notebook.
+* A virtual network set up in the resource group for samples that use multiple compute targets. The Cartpole and Multi-agent Particle examples do not need a virtual network. Any network security group defined on the virtual network must allow network traffic on ports used by Azure infrastructure services. Sample instructions are provided in Atari Pong and Minecraft example notebooks.
 
 
 ## Setup
@@ -57,7 +57,7 @@
 "source": [
 "## Prerequisite\n",
 "\n",
-"The user should have completed the [Reinforcement Learning in Azure Machine Learning - Setting Up Development Environment](../setup/devenv_setup.ipynb) to setup a virtual network. This virtual network will be used here for head and worker compute targets. It is highly recommended that the user should go through the [Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb) to understand the basics of Reinforcement Learning in Azure Machine Learning and Ray RLlib used in this notebook."
+"It is highly recommended that the user should go through the [Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb) to understand the basics of Reinforcement Learning in Azure Machine Learning and Ray RLlib used in this notebook."
 ]
 },
 {
@@ -69,6 +69,7 @@
 "\n",
 "* Connecting to a workspace to enable communication between your local machine and remote resources\n",
 "* Creating an experiment to track all your runs\n",
+"* Setting up a virtual network\n",
 "* Creating remote head and worker compute target on a virtual network to use for training"
 ]
 },
@@ -140,9 +141,13 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Specify the name of your virtual network\n",
+"### Create Virtual Network\n",
 "\n",
-"The resource group you use must contain a virtual network. Specify the name of the virtual network here created in the [Azure Machine Learning Reinforcement Learning Sample - Setting Up Development Environment](../setup/devenv_setup.ipynb)."
+"If you are using separate compute targets for the Ray head and worker, a virtual network must be created in the resource group. If you have already created a virtual network in the resource group, you can skip this step.\n",
+"\n",
+"To do this, you first must install the Azure Networking API.\n",
+"\n",
+"`pip install --upgrade azure-mgmt-network`"
 ]
 },
 {
@@ -151,14 +156,131 @@
 "metadata": {},
 "outputs": [],
 "source": [
+"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
+"#!pip install --upgrade azure-mgmt-network"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azure.mgmt.network import NetworkManagementClient\n",
+"\n",
 "# Virtual network name\n",
-"vnet_name = 'your_vnet'"
+"vnet_name =\"rl_pong_vnet\"\n",
+"\n",
+"# Default subnet\n",
+"subnet_name =\"default\"\n",
+"\n",
+"# The Azure subscription you are using\n",
+"subscription_id=ws.subscription_id\n",
+"\n",
+"# The resource group for the reinforcement learning cluster\n",
+"resource_group=ws.resource_group\n",
+"\n",
+"# Azure region of the resource group\n",
+"location=ws.location\n",
+"\n",
+"network_client = NetworkManagementClient(ws._auth_object, subscription_id)\n",
+"\n",
+"async_vnet_creation = network_client.virtual_networks.create_or_update(\n",
+"    resource_group,\n",
+"    vnet_name,\n",
+"    {\n",
+"        'location': location,\n",
+"        'address_space': {\n",
+"            'address_prefixes': ['10.0.0.0/16']\n",
+"        }\n",
+"    }\n",
+")\n",
+"\n",
+"async_vnet_creation.wait()\n",
+"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+"### Set up Network Security Group on Virtual Network\n",
+"\n",
+"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).\n",
+"\n",
+"A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).\n",
+"\n",
+"You may need to modify the code below to match your scenario."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"import azure.mgmt.network.models\n",
+"\n",
+"security_group_name = vnet_name + '-' + \"nsg\"\n",
+"security_rule_name = \"AllowAML\"\n",
+"\n",
+"# Create a network security group\n",
+"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
+"    location=location,\n",
+"    security_rules=[\n",
+"        azure.mgmt.network.models.SecurityRule(\n",
+"            name=security_rule_name,\n",
+"            access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
+"            description='Reinforcement Learning in Azure Machine Learning rule',\n",
+"            destination_address_prefix='*',\n",
+"            destination_port_range='29876-29877',\n",
+"            direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
+"            priority=400,\n",
+"            protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
+"            source_address_prefix='BatchNodeManagement',\n",
+"            source_port_range='*'\n",
+"        ),\n",
+"    ],\n",
+")\n",
+"\n",
+"async_nsg_creation = network_client.network_security_groups.create_or_update(\n",
+"    resource_group,\n",
+"    security_group_name,\n",
+"    nsg_params,\n",
+")\n",
+"\n",
+"async_nsg_creation.wait() \n",
+"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
+"\n",
+"network_security_group = network_client.network_security_groups.get(\n",
+"    resource_group,\n",
+"    security_group_name,\n",
+")\n",
+"\n",
+"# Define a subnet to be created with network security group\n",
+"subnet = azure.mgmt.network.models.Subnet(\n",
+"            id='default',\n",
+"            address_prefix='10.0.0.0/24',\n",
+"            network_security_group=network_security_group\n",
+"        )\n",
+" \n",
+"# Create subnet on virtual network\n",
+"async_subnet_creation = network_client.subnets.create_or_update(\n",
+"    resource_group_name=resource_group,\n",
+"    virtual_network_name=vnet_name,\n",
+"    subnet_name=subnet_name,\n",
+"    subnet_parameters=subnet\n",
+")\n",
+"\n",
+"async_subnet_creation.wait()\n",
+"print(\"Subnet created successfully:\", async_subnet_creation.result())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Review the virtual network security rules\n",
 "Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
 ]
 },
@@ -152,6 +152,9 @@
 "from azureml.core.compute import ComputeInstance\n",
 "from azureml.core.compute_target import ComputeTargetException\n",
 "\n",
+"import random\n",
+"import string\n",
+"\n",
 "# Load current compute instance info\n",
 "current_compute_instance = load_nbvm()\n",
 "\n",
@@ -160,7 +163,8 @@
 "    print(\"Current compute instance:\", current_compute_instance)\n",
 "    instance_name = current_compute_instance['instance']\n",
 "else:\n",
-"    instance_name = \"cartpole-ci-stdd2v2\"\n",
+"    # Compute instance name needs to be unique across all existing compute instances within an Azure region\n",
+"    instance_name = \"cartpole-ci-\" + \"\".join(random.choice(string.ascii_lowercase) for _ in range(5))\n",
 "    try:\n",
 "        instance = ComputeInstance(workspace=ws, name=instance_name)\n",
 "        print('Found existing instance, use it.')\n",
@@ -176,7 +180,7 @@
 "compute_target = ws.compute_targets[instance_name]\n",
 "\n",
 "print(\"Compute target status:\")\n",
-"print(compute_target.get_status().serialize())\n"
+"print(compute_target.get_status().serialize())"
 ]
 },
 {
@@ -77,11 +77,6 @@
 "workspace. For detailed instructions see [Tutorial: Get started creating\n",
 "your first ML experiment.](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup)\n",
 "\n",
-"In addition, please follow the instructions in the [Reinforcement Learning in\n",
-"Azure Machine Learning - Setting Up Development Environment](../setup/devenv_setup.ipynb)\n",
-"notebook to correctly set up a Virtual Network which is required for completing \n",
-"this tutorial.\n",
-"\n",
 "While this is a standalone notebook, we highly recommend going over the\n",
 "introductory notebooks for RL first.\n",
 "- Getting started:\n",
@@ -96,6 +91,7 @@
 "This includes:\n",
 "- Connecting to your existing Azure Machine Learning workspace.\n",
 "- Creating an experiment to track runs.\n",
+"- Setting up a virtual network\n",
 "- Creating remote compute targets for [Ray](https://docs.ray.io/en/latest/index.html).\n",
 "\n",
 "### Azure Machine Learning SDK\n",
@@ -161,6 +157,164 @@
 "exp = Experiment(workspace=ws, name='minecraft-maze')"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Create Virtual Network\n",
+"\n",
+"If you are using separate compute targets for the Ray head and worker, a virtual network must be created in the resource group. If you have already created a virtual network in the resource group, you can skip this step.\n",
+"\n",
+"To do this, you first must install the Azure Networking API.\n",
+"\n",
+"`pip install --upgrade azure-mgmt-network`"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
+"#!pip install --upgrade azure-mgmt-network"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azure.mgmt.network import NetworkManagementClient\n",
+"\n",
+"# Virtual network name\n",
+"vnet_name =\"rl_minecraft_vnet\"\n",
+"\n",
+"# Default subnet\n",
+"subnet_name =\"default\"\n",
+"\n",
+"# The Azure subscription you are using\n",
+"subscription_id=ws.subscription_id\n",
+"\n",
+"# The resource group for the reinforcement learning cluster\n",
+"resource_group=ws.resource_group\n",
+"\n",
+"# Azure region of the resource group\n",
+"location=ws.location\n",
+"\n",
+"network_client = NetworkManagementClient(ws._auth_object, subscription_id)\n",
+"\n",
+"async_vnet_creation = network_client.virtual_networks.create_or_update(\n",
+" resource_group,\n",
+" vnet_name,\n",
+" {\n",
+" 'location': location,\n",
+" 'address_space': {\n",
+" 'address_prefixes': ['10.0.0.0/16']\n",
+" }\n",
+" }\n",
+")\n",
+"\n",
+"async_vnet_creation.wait()\n",
+"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Set up Network Security Group on Virtual Network\n",
+"\n",
+"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).\n",
+"\n",
+"A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).\n",
+"\n",
+"You may need to modify the code below to match your scenario."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"import azure.mgmt.network.models\n",
+"\n",
+"security_group_name = vnet_name + '-' + \"nsg\"\n",
+"security_rule_name = \"AllowAML\"\n",
+"\n",
+"# Create a network security group\n",
+"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
+" location=location,\n",
+" security_rules=[\n",
+" azure.mgmt.network.models.SecurityRule(\n",
+" name=security_rule_name,\n",
+" access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
+" description='Reinforcement Learning in Azure Machine Learning rule',\n",
+" destination_address_prefix='*',\n",
+" destination_port_range='29876-29877',\n",
+" direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
+" priority=400,\n",
+" protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
+" source_address_prefix='BatchNodeManagement',\n",
+" source_port_range='*'\n",
+" ),\n",
+" ],\n",
+")\n",
+"\n",
+"async_nsg_creation = network_client.network_security_groups.create_or_update(\n",
+" resource_group,\n",
+" security_group_name,\n",
+" nsg_params,\n",
+")\n",
+"\n",
+"async_nsg_creation.wait() \n",
+"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
+"\n",
+"network_security_group = network_client.network_security_groups.get(\n",
+" resource_group,\n",
+" security_group_name,\n",
+")\n",
+"\n",
+"# Define a subnet to be created with network security group\n",
+"subnet = azure.mgmt.network.models.Subnet(\n",
+" id='default',\n",
+" address_prefix='10.0.0.0/24',\n",
+" network_security_group=network_security_group\n",
+" )\n",
+" \n",
+"# Create subnet on virtual network\n",
+"async_subnet_creation = network_client.subnets.create_or_update(\n",
+" resource_group_name=resource_group,\n",
+" virtual_network_name=vnet_name,\n",
+" subnet_name=subnet_name,\n",
+" subnet_parameters=subnet\n",
+")\n",
+"\n",
+"async_subnet_creation.wait()\n",
+"print(\"Subnet created successfully:\", async_subnet_creation.result())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Review the virtual network security rules\n",
+"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with a broader range of ports that allows ports 29876-29877 to be opened. Please review your network security group rules. "
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from files.networkutils import *\n",
+"\n",
+"check_vnet_security_rules(ws._auth_object, ws.subscription_id, ws.resource_group, vnet_name, True)"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -203,12 +357,6 @@
 "from azureml.core.compute import ComputeTarget, AmlCompute\n",
 "from azureml.core.compute_target import ComputeTargetException\n",
 "\n",
-"# please enter the name of your Virtual Network (see Prerequisites -> Workspace setup)\n",
-"vnet_name = 'your_vnet'\n",
-"\n",
-"# name of the Virtual Network subnet ('default' the default name)\n",
-"subnet_name = 'default'\n",
-"\n",
 "gpu_cluster_name = 'gpu-cl-nc6-vnet'\n",
 "\n",
 "try:\n",
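The cells added above open inbound TCP ports 29876-29877 from `BatchNodeManagement` and then call `check_vnet_security_rules` to verify the result. A minimal offline sketch of the kind of port-coverage check this involves — an NSG rule expresses its destination ports as `*`, a single port, or a `lo-hi` range (the helper names are ours, and this does not call Azure):

```python
def parse_port_range(spec):
    # An NSG port spec is "*", a single port like "29876", or a range like "29876-29877".
    if spec == "*":
        return (0, 65535)
    if "-" in spec:
        lo, hi = spec.split("-")
        return (int(lo), int(hi))
    return (int(spec), int(spec))

def rule_covers(rule_specs, start, end):
    """True if every port in [start, end] falls inside one of the rule's ranges."""
    ranges = [parse_port_range(spec) for spec in rule_specs]
    return all(
        any(lo <= port <= hi for lo, hi in ranges)
        for port in range(start, end + 1)
    )

# The AML management ports the notebook's "AllowAML" rule opens:
print(rule_covers(["29876-29877"], 29876, 29877))
print(rule_covers(["22", "443"], 29876, 29877))
```

A rule with a broader range (for example `*` or `29000-30000`) also satisfies the check, which is why the notebook asks you to review existing rules before adding a new one.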
@@ -1,262 +0,0 @@
-{
-"cells": [
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Copyright (c) Microsoft Corporation. All rights reserved.\n",
-"\n",
-"Licensed under the MIT License."
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-""
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"# Reinforcement Learning in Azure Machine Learning - Setting Up Development Environment\n",
-"\n",
-"Ray multi-node cluster setup requires all worker nodes to be able to communicate with the head node. This notebook explains you how to setup a virtual network, to be used by the Ray head and worker compute targets, created and used in other notebook examples."
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### Prerequisite\n",
-"\n",
-"The user should have completed the Azure Machine Learning Tutorial: [Get started creating your first ML experiment with the Python SDK](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup). You will need to make sure that you have a valid subscription ID, a resource group, and an Azure Machine Learning workspace."
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### Azure Machine Learning SDK \n",
-"Display the Azure Machine Learning SDK version."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"import azureml.core\n",
-"\n",
-"print(\"Azure Machine Learning SDK Version: \", azureml.core.VERSION)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### Get Azure Machine Learning workspace\n",
-"Get a reference to an existing Azure Machine Learning workspace.\n"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azureml.core import Workspace\n",
-"\n",
-"ws = Workspace.from_config()\n",
-"print(ws.name, ws.location, ws.resource_group, sep = ' | ')"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### Create Virtual Network\n",
-"\n",
-"If you are using separate compute targets for the Ray head and worker, a virtual network must be created in the resource group. If you have alraeady created a virtual network in the resource group, you can skip this step.\n",
-"\n",
-"To do this, you first must install the Azure Networking API.\n",
-"\n",
-"`pip install --upgrade azure-mgmt-network`"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
-"#!pip install --upgrade azure-mgmt-network"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azure.mgmt.network import NetworkManagementClient\n",
-"\n",
-"# Virtual network name\n",
-"vnet_name =\"your_vnet\"\n",
-"\n",
-"# Default subnet\n",
-"subnet_name =\"default\"\n",
-"\n",
-"# The Azure subscription you are using\n",
-"subscription_id=ws.subscription_id\n",
-"\n",
-"# The resource group for the reinforcement learning cluster\n",
-"resource_group=ws.resource_group\n",
-"\n",
-"# Azure region of the resource group\n",
-"location=ws.location\n",
-"\n",
-"network_client = NetworkManagementClient(ws._auth_object, subscription_id)\n",
-"\n",
-"async_vnet_creation = network_client.virtual_networks.create_or_update(\n",
-" resource_group,\n",
-" vnet_name,\n",
-" {\n",
-" 'location': location,\n",
-" 'address_space': {\n",
-" 'address_prefixes': ['10.0.0.0/16']\n",
-" }\n",
-" }\n",
-")\n",
-"\n",
-"async_vnet_creation.wait()\n",
-"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### Set up Network Security Group on Virtual Network\n",
-"\n",
-"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).\n",
-"\n",
-"A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).\n",
-"\n",
-"You may need to modify the code below to match your scenario."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"import azure.mgmt.network.models\n",
-"\n",
-"security_group_name = vnet_name + '-' + \"nsg\"\n",
-"security_rule_name = \"AllowAML\"\n",
-"\n",
-"# Create a network security group\n",
-"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
-" location=location,\n",
-" security_rules=[\n",
-" azure.mgmt.network.models.SecurityRule(\n",
-" name=security_rule_name,\n",
-" access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
-" description='Reinforcement Learning in Azure Machine Learning rule',\n",
-" destination_address_prefix='*',\n",
-" destination_port_range='29876-29877',\n",
-" direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
-" priority=400,\n",
-" protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
-" source_address_prefix='BatchNodeManagement',\n",
-" source_port_range='*'\n",
-" ),\n",
-" ],\n",
-")\n",
-"\n",
-"async_nsg_creation = network_client.network_security_groups.create_or_update(\n",
-" resource_group,\n",
-" security_group_name,\n",
-" nsg_params,\n",
-")\n",
-"\n",
-"async_nsg_creation.wait() \n",
-"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
-"\n",
-"network_security_group = network_client.network_security_groups.get(\n",
-" resource_group,\n",
-" security_group_name,\n",
-")\n",
-"\n",
-"# Define a subnet to be created with network security group\n",
-"subnet = azure.mgmt.network.models.Subnet(\n",
-" id='default',\n",
-" address_prefix='10.0.0.0/24',\n",
-" network_security_group=network_security_group\n",
-" )\n",
-" \n",
-"# Create subnet on virtual network\n",
-"async_subnet_creation = network_client.subnets.create_or_update(\n",
-" resource_group_name=resource_group,\n",
-" virtual_network_name=vnet_name,\n",
-" subnet_name=subnet_name,\n",
-" subnet_parameters=subnet\n",
-")\n",
-"\n",
-"async_subnet_creation.wait()\n",
-"print(\"Subnet created successfully:\", async_subnet_creation.result())"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### Review the virtual network security rules\n",
-"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from files.networkutils import *\n",
-"\n",
-"check_vnet_security_rules(ws._auth_object, ws.subscription_id, ws.resource_group, vnet_name, True)"
-]
-}
-],
-"metadata": {
-"authors": [
-{
-"name": "vineetg"
-}
-],
-"kernelspec": {
-"display_name": "Python 3.6",
-"language": "python",
-"name": "python36"
-},
-"language_info": {
-"codemirror_mode": {
-"name": "ipython",
-"version": 3
-},
-"file_extension": ".py",
-"mimetype": "text/x-python",
-"name": "python",
-"nbconvert_exporter": "python",
-"pygments_lexer": "ipython3",
-"version": "3.6.5"
-},
-"notice": "Copyright (c) Microsoft Corporation. All rights reserved.\u00e2\u20ac\u00afLicensed under the MIT License.\u00e2\u20ac\u00af "
-},
-"nbformat": 4,
-"nbformat_minor": 4
-}
@@ -1,4 +0,0 @@
-name: devenv_setup
-dependencies:
-- pip:
-- azureml-sdk
@@ -100,7 +100,7 @@
 "\n",
 "# Check core SDK version number\n",
 "\n",
-"print(\"This notebook was created using SDK version 1.17.0, you are currently running version\", azureml.core.VERSION)"
+"print(\"This notebook was created using SDK version 1.20.0, you are currently running version\", azureml.core.VERSION)"
 ]
 },
 {
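The hunk above bumps the SDK version string the notebook was authored with from 1.17.0 to 1.20.0. If you want to compare such `major.minor.patch` strings against the running `azureml.core.VERSION`, note that plain string comparison misorders versions like `1.9.0` vs `1.17.0`; a small sketch of a numeric comparison (the `version_tuple` helper is ours, and the version strings are assumed to be plain dotted integers):

```python
def version_tuple(v):
    # Split "1.20.0" into (1, 20, 0) so comparison is numeric, not lexicographic.
    return tuple(int(part) for part in v.split("."))

notebook_version = "1.20.0"
running_version = "1.17.0"  # stand-in for azureml.core.VERSION

if version_tuple(running_version) < version_tuple(notebook_version):
    print("Running SDK is older than the version this notebook was created with.")
```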
@@ -37,7 +37,6 @@
 "1. [Other ways to create environments](#Other-ways-to-create-environments)\n",
 " 1. From existing Conda environment\n",
 " 1. From Conda or pip files\n",
-"1. [Estimators and environments](#Estimators-and-environments) \n",
 "1. [Using environments for inferencing](#Using-environments-for-inferencing)\n",
 "1. [Docker settings](#Docker-settings)\n",
 "1. [Spark and Azure Databricks settings](#Spark-and-Azure-Databricks-settings)\n",
@@ -424,11 +423,9 @@
 "source": [
 "## Next steps\n",
 "\n",
-"Learn more about remote runs on different compute targets:\n",
+"Train with ML frameworks on Azure ML:\n",
 "\n",
-"* [Train on ML Compute](../../training/train-on-amlcompute/train-on-amlcompute.ipynb)\n",
+"* [Train with ML frameworks](../../ml-frameworks)\n",
-"\n",
-"* [Train on remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb)\n",
 "\n",
 "Learn more about registering and deploying a model:\n",
 "\n",
5
index.md
@@ -35,7 +35,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
 | :star:[How to use Pipeline Drafts to create a Published Pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-pipeline-drafts.ipynb) | Demonstrates the use of Pipeline Drafts | Custom | AML Compute | None | Azure ML | None |
 | :star:[Azure Machine Learning Pipeline with HyperDriveStep](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-parameter-tuning-with-hyperdrive.ipynb) | Demonstrates the use of HyperDriveStep | Custom | AML Compute | None | Azure ML | None |
 | :star:[How to Publish a Pipeline and Invoke the REST endpoint](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-publish-and-run-using-rest-endpoint.ipynb) | Demonstrates the use of Published Pipelines | Custom | AML Compute | None | Azure ML | None |
-| :star:[How to Setup a Schedule for a Published Pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-schedule-for-a-published-pipeline.ipynb) | Demonstrates the use of Schedules for Published Pipelines | Custom | AML Compute | None | Azure ML | None |
+| :star:[How to Setup a Schedule for a Published Pipeline or Pipeline Endpoint](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-schedule-for-a-published-pipeline.ipynb) | Demonstrates the use of Schedules for Published Pipelines and Pipeline endpoints | Custom | AML Compute | None | Azure ML | None |
 | [How to setup a versioned Pipeline Endpoint](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-versioned-pipeline-endpoints.ipynb) | Demonstrates the use of PipelineEndpoint to run a specific version of the Published Pipeline | Custom | AML Compute | None | Azure ML | None |
 | :star:[How to use DataPath as a PipelineParameter](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-datapath-and-pipelineparameter.ipynb) | Demonstrates the use of DataPath as a PipelineParameter | Custom | AML Compute | None | Azure ML | None |
 | :star:[How to use Dataset as a PipelineParameter](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-dataset-and-pipelineparameter.ipynb) | Demonstrates the use of Dataset as a PipelineParameter | Custom | AML Compute | None | Azure ML | None |
@@ -97,6 +97,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
 ## Other Notebooks
 |Title| Task | Dataset | Training Compute | Deployment Target | ML Framework | Tags |
 |:----|:-----|:-------:|:----------------:|:-----------------:|:------------:|:------------:|
+| [DNN Text Featurization](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb) | Text featurization using DNNs for classification | None | AML Compute | None | None | None |
 | [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) | | | | | | |
 | [fairlearn-azureml-mitigation](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/fairness/fairlearn-azureml-mitigation.ipynb) | | | | | | |
 | [upload-fairness-dashboard](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/fairness/upload-fairness-dashboard.ipynb) | | | | | | |
@@ -121,14 +122,12 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
 | [train-explain-model-on-amlcompute-and-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-on-amlcompute-and-deploy.ipynb) | | | | | | |
 | [training_notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/notebook_runner/training_notebook.ipynb) | | | | | | |
 | [nyc-taxi-data-regression-model-building](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb) | | | | | | |
-| [pipeline-style-transfer-mpi](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/pipeline-style-transfer/pipeline-style-transfer-mpi.ipynb) | | | | | | |
 | [authentication-in-azureml](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.ipynb) | | | | | | |
 | [pong_rllib](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb) | | | | | | |
 | [cartpole_ci](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/cartpole-on-compute-instance/cartpole_ci.ipynb) | | | | | | |
 | [cartpole_sc](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.ipynb) | | | | | | |
 | [minecraft](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.ipynb) | | | | | | |
 | [particle](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/multiagent-particle-envs/particle.ipynb) | | | | | | |
-| [devenv_setup](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/setup/devenv_setup.ipynb) | | | | | | |
 | [Logging APIs](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb) | Logging APIs and analyzing results | None | None | None | None | None |
 | [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master//setup-environment/configuration.ipynb) | | | | | | |
 | [tutorial-1st-experiment-sdk-train](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/create-first-ml-experiment/tutorial-1st-experiment-sdk-train.ipynb) | | | | | | |
|
| [tutorial-1st-experiment-sdk-train](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/create-first-ml-experiment/tutorial-1st-experiment-sdk-train.ipynb) | | | | | | |
|
||||||
|
|||||||
@@ -28,7 +28,7 @@ git clone https://github.com/Azure/MachineLearningNotebooks.git
 pip install azureml-sdk[notebooks,tensorboard]
 
 # install model explainability component
-pip install azureml-sdk[explain]
+pip install azureml-sdk[interpret]
 
 # install automated ml components
 pip install azureml-sdk[automl]
@@ -86,7 +86,7 @@ If you need additional Azure ML SDK components, you can either modify the Docker
 pip install azureml-sdk[automl]
 
 # install the core SDK and model explainability component
-pip install azureml-sdk[explain]
+pip install azureml-sdk[interpret]
 
 # install the core SDK and experimental components
 pip install azureml-sdk[contrib]
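Both hunks rename the `explain` extra to `interpret`. After installing, a guarded check along these lines can confirm the component actually landed (a plain-Python sketch, not from the repo; it assumes the `[interpret]` extra provides the `azureml.interpret` module, which is the documented home of the renamed component):

```python
import importlib.util


def extra_installed(module_name: str) -> bool:
    """Return True if the module is importable, without actually importing it."""
    try:
        return importlib.util.find_spec(module_name) is not None
    except ModuleNotFoundError:
        # The parent package (e.g. 'azureml') is missing entirely.
        return False


# 'azureml.interpret' is the assumed module name for the [interpret] extra.
for name in ("azureml.core", "azureml.interpret"):
    status = "present" if extra_installed(name) else "missing"
    print(f"{name}: {status}")
```

Using `find_spec` rather than a bare `import` keeps the check cheap and side-effect free.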
@@ -102,7 +102,7 @@
 "source": [
 "import azureml.core\n",
 "\n",
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
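The cell in the hunk above only prints the pinned and installed versions side by side. A small numeric comparison can actually flag a mismatch (a plain-Python sketch, not part of the notebook; the `installed` value stands in for `azureml.core.VERSION`):

```python
def parse_version(version: str) -> tuple:
    """Turn '1.20.0' into (1, 20, 0) so versions compare numerically, not lexically."""
    return tuple(int(part) for part in version.split("."))


PINNED = "1.20.0"     # version the notebook was created with (from the diff)
installed = "1.17.0"  # stand-in for azureml.core.VERSION

if parse_version(installed) < parse_version(PINNED):
    print(f"SDK {installed} is older than the pinned {PINNED}; "
          f"consider 'pip install --upgrade azureml-sdk'.")
else:
    print(f"SDK {installed} satisfies the pinned {PINNED}.")
```

Tuple comparison avoids the classic string-comparison trap where `"1.9.0" > "1.17.0"`.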
@@ -306,7 +306,7 @@
 "\n",
 "|Property| Value in this tutorial |Description|\n",
 "|----|----|---|\n",
-"|**iteration_timeout_minutes**|2|Time limit in minutes for each iteration. Reduce this value to decrease total runtime.|\n",
+"|**iteration_timeout_minutes**|10|Time limit in minutes for each iteration. Increase this value for larger datasets that need more time for each iteration.|\n",
 "|**experiment_timeout_hours**|0.3|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|\n",
 "|**enable_early_stopping**|True|Flag to enable early termination if the score is not improving in the short term.|\n",
 "|**primary_metric**| spearman_correlation | Metric that you want to optimize. The best-fit model will be chosen based on this metric.|\n",
@@ -324,7 +324,7 @@
 "import logging\n",
 "\n",
 "automl_settings = {\n",
-" \"iteration_timeout_minutes\": 2,\n",
+" \"iteration_timeout_minutes\": 10,\n",
 " \"experiment_timeout_hours\": 0.3,\n",
 " \"enable_early_stopping\": True,\n",
 " \"primary_metric\": 'spearman_correlation',\n",
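The two hunks above raise `iteration_timeout_minutes` from 2 to 10 while leaving `experiment_timeout_hours` at 0.3. The interaction is easy to miss: the whole experiment now gets 18 minutes, so a single slow iteration may consume more than half the budget. A quick sanity check in plain Python (mirroring the dict from the diff; no SDK required):

```python
# Mirror of the automl_settings dict from the notebook diff.
automl_settings = {
    "iteration_timeout_minutes": 10,
    "experiment_timeout_hours": 0.3,
    "enable_early_stopping": True,
    "primary_metric": "spearman_correlation",
}

budget_min = automl_settings["experiment_timeout_hours"] * 60
per_iter_min = automl_settings["iteration_timeout_minutes"]

print(f"experiment budget: {budget_min:.0f} min, per-iteration cap: {per_iter_min} min")

# The per-iteration cap must fit inside the overall experiment budget,
# otherwise no iteration could ever run to its timeout.
assert per_iter_min <= budget_min
```

With early stopping enabled, iterations that stall on the chosen metric are cut short anyway, which is what keeps the larger per-iteration cap affordable.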