mirror of https://github.com/Azure/MachineLearningNotebooks.git
synced 2025-12-20 01:27:06 -05:00

Compare commits: lostmygith...azureml-sd (14 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 74deb14fac | |
| | 4ed1d445ae | |
| | b5c15db0b4 | |
| | 91d43bade6 | |
| | bd750f5817 | |
| | 637bcc5973 | |
| | ba741fb18d | |
| | ac0ad8d487 | |
| | 5019ad6c5a | |
| | 41a2ebd2b3 | |
| | 53e3283d1d | |
| | ba9c4c5465 | |
| | a6c65f00ec | |
| | 95072eabc2 | |
14
README.md
@@ -2,7 +2,7 @@
 > a community-driven repository of examples using mlflow for tracking can be found at https://github.com/Azure/azureml-examples
 
-This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
+This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
 
 [![Build Status](https://dev.azure.com/azureml-github/Azure-MachineLearningNotebooks/_apis/build/status/smoke.test?branchName=master)](https://dev.azure.com/azureml-github/Azure-MachineLearningNotebooks/_build/latest?definitionId=21&branchName=master)
@@ -20,10 +20,10 @@ This [index](./index.md) should assist in navigating the Azure Machine Learning
 If you want to...
 
 * ...try out and explore Azure ML, start with image classification tutorials: [Part 1 (Training)](./tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb) and [Part 2 (Deployment)](./tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb).
-* ...learn about experimentation and tracking run history, first [train within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) and [using logging APIs](./how-to-use-azureml/training/logging-api/logging-api.ipynb).
+* ...learn about experimentation and tracking run history: [track and monitor experiments](./how-to-use-azureml/track-and-monitor-experiments).
-* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
+* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
-* ...deploy models as a realtime scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
+* ...deploy models as a realtime scoring service, first learn the basics by [deploying to Azure Container Instance](./how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.ipynb), then learn how to [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
-* ...deploy models as a batch scoring service, first [train a model within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
+* ...deploy models as a batch scoring service: [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb) and [use Machine Learning Pipelines to deploy your model](https://aka.ms/pl-batch-scoring).
 * ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb).
 
 ## Tutorials
@@ -35,13 +35,12 @@ The [Tutorials](./tutorials) folder contains notebooks for the tutorials describ
 The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
 
 - [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets
-- [Training with Deep Learning](./how-to-use-azureml/training-with-deep-learning) - Examples demonstrating how to build deep learning models using estimators and parameter sweeps
+- [Training with ML and DL frameworks](./how-to-use-azureml/ml-frameworks) - Examples demonstrating how to build and train machine learning models at scale on Azure ML and perform hyperparameter tuning.
 - [Manage Azure ML Service](./how-to-use-azureml/manage-azureml-service) - Examples how to perform tasks, such as authenticate against Azure ML service in different ways.
 - [Automated Machine Learning](./how-to-use-azureml/automated-machine-learning) - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models
 - [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
 - [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions
 - [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks
-- [Monitor Models](./how-to-use-azureml/monitor-models) - Examples showing how to enable model monitoring services such as DataDrift
 - [Reinforcement Learning](./how-to-use-azureml/reinforcement-learning) - Examples showing how to train reinforcement learning agents
 
 ---
@@ -60,7 +59,6 @@ Visit this [community repository](https://github.com/microsoft/MLOps/tree/master
 ## Projects using Azure Machine Learning
 
 Visit following repos to see projects contributed by Azure ML users:
-- [AMLSamples](https://github.com/Azure/AMLSamples) Number of end-to-end examples, including face recognition, predictive maintenance, customer churn and sentiment analysis.
 - [Learn about Natural Language Processing best practices using Azure Machine Learning service](https://github.com/microsoft/nlp)
 - [Pre-Train BERT models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
 - [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)
@@ -103,7 +103,7 @@
 "source": [
 "import azureml.core\n",
 "\n",
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
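Many of the hunks in this compare bump the pinned SDK version string from 1.17.0 to 1.18.0; the notebooks print a banner comparing that pin against `azureml.core.VERSION`. A minimal self-contained sketch of the same idea (pure Python — the `created_with` argument stands in for the pinned string, since `azureml.core` is not imported here) could look like:

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string like '1.18.0' into comparable integers."""
    return tuple(int(part) for part in v.split("."))

def version_notice(created_with: str, current: str) -> str:
    """Mimic the notebooks' version banner: warn when the installed SDK
    is older than the version the notebook was authored against."""
    if parse_version(current) < parse_version(created_with):
        return f"SDK {current} is older than {created_with}; consider upgrading."
    return f"SDK {current} satisfies notebook version {created_with}."

# prints a warning, since 1.17.0 predates the notebook's 1.18.0 pin
print(version_notice("1.18.0", "1.17.0"))
```

Tuple comparison handles multi-digit components ("1.18.0" vs "1.9.0") correctly, which naive string comparison would not.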
@@ -5,6 +5,7 @@ dependencies:
 - pip<=19.3.1
 - python>=3.5.2,<3.6.8
 - nb_conda
+- boto3==1.15.18
 - matplotlib==2.1.0
 - numpy==1.18.5
 - cython
@@ -20,9 +21,8 @@ dependencies:
 
 - pip:
   # Required packages for AzureML execution, history, and data preparation.
-  - azureml-widgets
+  - azureml-widgets~=1.18.0
   - pytorch-transformers==1.0.0
   - spacy==2.1.8
   - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_win32_requirements.txt [--no-deps]
+  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.18.0/validated_win32_requirements.txt [--no-deps]
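The env files switch `azureml-widgets` from an unpinned requirement to the compatible-release specifier `azureml-widgets~=1.18.0`, which accepts any 1.18.x release but rejects 1.19. A rough self-contained sketch of that matching rule (an approximation of PEP 440 `~=` semantics, not pip's actual resolver) is:

```python
def satisfies_compatible_release(version: str, spec: str) -> bool:
    """Approximate PEP 440 '~=' semantics: the version must be >= spec
    and must stay within the series obtained by dropping spec's last
    component. E.g. '~=1.18.0' accepts 1.18.0 and 1.18.4, rejects 1.19.0."""
    v = tuple(int(p) for p in version.split("."))
    s = tuple(int(p) for p in spec.split("."))
    return v >= s and v[: len(s) - 1] == s[:-1]

print(satisfies_compatible_release("1.18.4", "1.18.0"))  # True
print(satisfies_compatible_release("1.19.0", "1.18.0"))  # False
```

Pinning with `~=` lets the notebooks pick up 1.18.x patch releases without drifting onto a new minor version of the SDK.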
@@ -5,6 +5,7 @@ dependencies:
 - pip<=19.3.1
 - python>=3.5.2,<3.6.8
 - nb_conda
+- boto3==1.15.18
 - matplotlib==2.1.0
 - numpy==1.18.5
 - cython
@@ -20,9 +21,9 @@ dependencies:
 
 - pip:
   # Required packages for AzureML execution, history, and data preparation.
-  - azureml-widgets
+  - azureml-widgets~=1.18.0
   - pytorch-transformers==1.0.0
   - spacy==2.1.8
   - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_linux_requirements.txt [--no-deps]
+  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.18.0/validated_linux_requirements.txt [--no-deps]
@@ -6,6 +6,7 @@ dependencies:
 - nomkl
 - python>=3.5.2,<3.6.8
 - nb_conda
+- boto3==1.15.18
 - matplotlib==2.1.0
 - numpy==1.18.5
 - cython
@@ -21,8 +22,8 @@ dependencies:
 
 - pip:
   # Required packages for AzureML execution, history, and data preparation.
-  - azureml-widgets
+  - azureml-widgets~=1.18.0
   - pytorch-transformers==1.0.0
   - spacy==2.1.8
   - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_darwin_requirements.txt [--no-deps]
+  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.18.0/validated_darwin_requirements.txt [--no-deps]
@@ -105,7 +105,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -899,7 +899,7 @@
 "metadata": {
 "authors": [
 {
-"name": "anumamah"
+"name": "ratanase"
 }
 ],
 "category": "tutorial",
@@ -93,7 +93,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -450,7 +450,7 @@
 "metadata": {
 "authors": [
 {
-"name": "tzvikei"
+"name": "ratanase"
 }
 ],
 "category": "tutorial",
@@ -42,9 +42,8 @@
 "\n",
 "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
 "\n",
-"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).\n",
-"\n",
 "Notebook synopsis:\n",
+"\n",
 "1. Creating an Experiment in an existing Workspace\n",
 "2. Configuration and remote run of AutoML for a text dataset (20 Newsgroups dataset from scikit-learn) for classification\n",
 "3. Registering the best model for future use\n",
@@ -97,7 +96,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -272,8 +271,6 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).\n",
-"\n",
 "This notebook uses the blocked_models parameter to exclude some models that can take a longer time to train on some text datasets. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results."
 ]
 },
@@ -32,13 +32,6 @@
 "8. [Test Retraining](#Test-Retraining)"
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)"
-]
-},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -88,7 +81,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -550,7 +543,7 @@
 "metadata": {
 "authors": [
 {
-"name": "anshirga"
+"name": "vivijay"
 }
 ],
 "kernelspec": {
@@ -92,7 +92,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -138,7 +138,8 @@
 "from azureml.core.compute_target import ComputeTargetException\n",
 "\n",
 "# Choose a name for your CPU cluster\n",
-"cpu_cluster_name = \"reg-cluster\"\n",
+"# Try to ensure that the cluster name is unique across the notebooks\n",
+"cpu_cluster_name = \"reg-model-proxy\"\n",
 "\n",
 "# Verify that cluster does not exist already\n",
 "try:\n",
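The cell being edited follows the usual AzureML get-or-create pattern: try to fetch a compute target by name, and provision it only when a `ComputeTargetException` says it does not exist. A minimal self-contained sketch of that pattern (with a hypothetical in-memory `registry` standing in for the workspace and a plain `KeyError` standing in for `ComputeTargetException`) is:

```python
registry = {}  # hypothetical stand-in for the workspace's compute targets

def get_or_create_cluster(name: str, vm_size: str = "STANDARD_D2_V2") -> dict:
    """Fetch an existing cluster by name; provision it only when missing.
    Mirrors the notebook's try/except ComputeTargetException flow."""
    try:
        return registry[name]          # cluster already exists: reuse it
    except KeyError:                   # analogue of ComputeTargetException
        cluster = {"name": name, "vm_size": vm_size}
        registry[name] = cluster
        return cluster

first = get_or_create_cluster("reg-model-proxy")
second = get_or_create_cluster("reg-model-proxy")
print(first is second)  # True: the second call reuses the existing cluster
```

The renamed cluster in this diff exists for the same reason the pattern does: a name collision across notebooks would silently reuse a cluster provisioned with different settings.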
@@ -451,7 +452,7 @@
 "metadata": {
 "authors": [
 {
-"name": "rakellam"
+"name": "sekrupa"
 }
 ],
 "categories": [
@@ -54,9 +54,8 @@
 "\n",
 "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
 "\n",
-"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)\n",
-"\n",
 "Notebook synopsis:\n",
+"\n",
 "1. Creating an Experiment in an existing Workspace\n",
 "2. Configuration and remote run of AutoML for a time-series model exploring Regression learners, Arima, Prophet and DNNs\n",
 "4. Evaluating the fitted model using a rolling test "
@@ -114,7 +113,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -350,9 +349,7 @@
 "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
 "|**training_data**|Input dataset, containing both features and label column.|\n",
 "|**label_column_name**|The name of the label column.|\n",
-"|**enable_dnn**|Enable Forecasting DNNs|\n",
-"\n",
-"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)."
+"|**enable_dnn**|Enable Forecasting DNNs|\n"
 ]
 },
 {
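The table in this cell documents a handful of AutoML forecasting settings. As a schematic only (plain Python — the real call is `AutoMLConfig` from the azureml SDK, and the dataset object and label name below are placeholders, not values taken from this diff), those parameters map onto keyword arguments roughly like:

```python
# Hypothetical stand-ins: "train_dataset" and "demand" are placeholders.
automl_settings = {
    "iteration_timeout_minutes": 20,   # time limit per iteration
    "training_data": "train_dataset",  # features and label column together
    "label_column_name": "demand",     # name of the label column
    "enable_dnn": True,                # enable forecasting DNNs
}

# In the notebooks these become keyword arguments of an AutoMLConfig call.
for key, value in automl_settings.items():
    print(f"{key} = {value}")
```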
@@ -650,7 +647,7 @@
 "metadata": {
 "authors": [
 {
-"name": "omkarm"
+"name": "jialiu"
 }
 ],
 "hide_code_all_hidden": false,
@@ -87,7 +87,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -594,7 +594,7 @@
 "metadata": {
 "authors": [
 {
-"name": "erwright"
+"name": "jialiu"
 }
 ],
 "category": "tutorial",
@@ -97,7 +97,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -703,7 +703,7 @@
 "metadata": {
 "authors": [
 {
-"name": "erwright"
+"name": "jialiu"
 }
 ],
 "categories": [
@@ -24,7 +24,7 @@
 "metadata": {},
 "source": [
 "## Introduction\n",
-"This notebook demonstrates the full interface to the `forecast()` function. \n",
+"This notebook demonstrates the full interface of the `forecast()` function. \n",
 "\n",
 "The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. \n",
 "\n",
@@ -94,7 +94,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -809,7 +809,7 @@
 "metadata": {
 "authors": [
 {
-"name": "erwright"
+"name": "jialiu"
 }
 ],
 "category": "tutorial",
@@ -82,7 +82,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -325,12 +325,11 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Customization\n",
|
"## Customization\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include,\n",
|
"The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:\n",
|
||||||
|
"\n",
|
||||||
"1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamp which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration need to be add with the columns and their desired types.\n",
|
"1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamp which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration need to be add with the columns and their desired types.\n",
|
||||||
"2. Transformer parameters update: Currently supports parameter change for Imputer only. User can customize imputation methods. The supported imputing methods for target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0 which the correct behavior should impute all the missing value with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.\n",
|
"2. Transformer parameters update: Currently supports parameter change for Imputer only. User can customize imputation methods. The supported imputing methods for target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. For instance, some datasets use NaN to represent 0 which the correct behavior should impute all the missing value with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.\n",
|
||||||
"3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns contain no useful data.\n",
|
"3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns contain no useful data."
|
||||||
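Dropping a leaky column before featurization can be sketched as follows (the column names are illustrative; `logQuantity` stands in for a feature that is a direct transform of the target):

```python
import pandas as pd

# "logQuantity" is leaky (a direct function of the target), so exclude it
# from featurization; "Price" is a legitimate feature and is kept.
df = pd.DataFrame({"logQuantity": [9.26, 8.72], "Price": [2.0, 2.5]})
df = df.drop(columns=["logQuantity"])
print(list(df.columns))  # ['Price']
```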
"\n",
|
|
||||||
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -383,7 +382,7 @@
|
|||||||
"The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.\n",
|
"The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.\n",
|
||||||
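The horizon arithmetic for a weekly series can be sketched in plain pandas (the dates and `n_test_periods` value are illustrative; in the notebook the horizon is passed to `AutoMLConfig`):

```python
import pandas as pd

# Weekly series: the forecast horizon is measured in weeks.
dates = pd.date_range("2019-01-07", periods=60, freq="W-MON")
n_test_periods = 20
train, test = dates[:-n_test_periods], dates[-n_test_periods:]

# Horizon = number of test samples per series; the first test date is
# exactly one frequency unit (7 days) past the last training date.
forecast_horizon = n_test_periods
print(forecast_horizon, (test[0] - train[-1]).days)  # 20 7
```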
"\n",
|
"\n",
|
||||||
"We note here that AutoML can sweep over two types of time-series models:\n",
|
"We note here that AutoML can sweep over two types of time-series models:\n",
|
||||||
"* Models that are trained for each series such as ARIMA and Facebook's Prophet. Note that these models are only available for [Enterprise Edition Workspaces](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace#upgrade).\n",
|
"* Models that are trained for each series such as ARIMA and Facebook's Prophet.\n",
|
||||||
"* Models trained across multiple time-series using a regression approach.\n",
|
"* Models trained across multiple time-series using a regression approach.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. \n",
|
"In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. \n",
|
||||||
@@ -764,7 +763,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"authors": [
|
"authors": [
|
||||||
{
|
{
|
||||||
"name": "erwright"
|
"name": "jialiu"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"category": "tutorial",
|
"category": "tutorial",
|
||||||
|
|||||||
@@ -96,7 +96,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
|
"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
|
||||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -359,7 +359,7 @@
|
|||||||
"Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data.\n",
|
"Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### Run the explanation\n",
|
"### Run the explanation\n",
|
||||||
"#### Download engineered feature importance from artifact store\n",
|
"#### Download the engineered feature importance from artifact store\n",
|
||||||
"You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features."
|
"You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -375,6 +375,25 @@
|
|||||||
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
|
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Download the raw feature importance from artifact store\n",
|
||||||
|
"You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"raw_explanations = client.download_model_explanation(raw=True)\n",
|
||||||
|
"print(raw_explanations.get_feature_importance_dict())\n",
|
||||||
|
"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
|
||||||
|
]
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@@ -474,6 +493,29 @@
|
|||||||
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
|
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Use Mimic Explainer for computing and visualizing raw feature importance\n",
|
||||||
|
"The explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# Compute the raw explanations\n",
|
||||||
|
"raw_explanations = explainer.explain(['local', 'global'], get_raw=True,\n",
|
||||||
|
" raw_feature_names=automl_explainer_setup_obj.raw_feature_names,\n",
|
||||||
|
" eval_dataset=automl_explainer_setup_obj.X_test_transform,\n",
|
||||||
|
" raw_eval_dataset=automl_explainer_setup_obj.X_test_raw)\n",
|
||||||
|
"print(raw_explanations.get_feature_importance_dict())\n",
|
||||||
|
"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
|
||||||
|
]
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@@ -589,10 +631,13 @@
|
|||||||
" automl_explainer_setup_obj = automl_setup_model_explanations(automl_model,\n",
|
" automl_explainer_setup_obj = automl_setup_model_explanations(automl_model,\n",
|
||||||
" X_test=data, task='classification')\n",
|
" X_test=data, task='classification')\n",
|
||||||
" # Retrieve model explanations for engineered explanations\n",
|
" # Retrieve model explanations for engineered explanations\n",
|
||||||
" engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) \n",
|
" engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform)\n",
|
||||||
|
" # Retrieve model explanations for raw explanations\n",
|
||||||
|
" raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True)\n",
|
||||||
" # You can return any data type as long as it is JSON-serializable\n",
|
" # You can return any data type as long as it is JSON-serializable\n",
|
||||||
" return {'predictions': predictions.tolist(),\n",
|
" return {'predictions': predictions.tolist(),\n",
|
||||||
" 'engineered_local_importance_values': engineered_local_importance_values}\n"
|
" 'engineered_local_importance_values': engineered_local_importance_values,\n",
|
||||||
|
" 'raw_local_importance_values': raw_local_importance_values}\n"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -725,7 +770,9 @@
|
|||||||
"# Print the predicted value\n",
|
"# Print the predicted value\n",
|
||||||
"print('predictions:\\n{}\\n'.format(output['predictions']))\n",
|
"print('predictions:\\n{}\\n'.format(output['predictions']))\n",
|
||||||
"# Print the engineered feature importances for the predicted value\n",
|
"# Print the engineered feature importances for the predicted value\n",
|
||||||
"print('engineered_local_importance_values:\\n{}\\n'.format(output['engineered_local_importance_values']))"
|
"print('engineered_local_importance_values:\\n{}\\n'.format(output['engineered_local_importance_values']))\n",
|
||||||
|
"# Print the raw feature importances for the predicted value\n",
|
||||||
|
"print('raw_local_importance_values:\\n{}\\n'.format(output['raw_local_importance_values']))\n"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -773,7 +820,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"authors": [
|
"authors": [
|
||||||
{
|
{
|
||||||
"name": "anumamah"
|
"name": "ratanase"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"category": "tutorial",
|
"category": "tutorial",
|
||||||
|
|||||||
@@ -42,8 +42,6 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
|
"If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
|
||||||
"\n",
|
"\n",
|
||||||
"An Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade) \n",
|
|
||||||
"\n",
|
|
||||||
"In this notebook you will learn how to:\n",
|
"In this notebook you will learn how to:\n",
|
||||||
"1. Create an `Experiment` in an existing `Workspace`.\n",
|
"1. Create an `Experiment` in an existing `Workspace`.\n",
|
||||||
"2. Instantiating AutoMLConfig with FeaturizationConfig for customization\n",
|
"2. Instantiating AutoMLConfig with FeaturizationConfig for customization\n",
|
||||||
@@ -98,7 +96,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
|
"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
|
||||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -223,9 +221,8 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Customization\n",
|
"## Customization\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade). \n",
|
|
||||||
"\n",
|
|
||||||
"Supported customization includes:\n",
|
"Supported customization includes:\n",
|
||||||
|
"\n",
|
||||||
"1. Column purpose update: Override feature type for the specified column.\n",
|
"1. Column purpose update: Override feature type for the specified column.\n",
|
||||||
"2. Transformer parameter update: Update parameters for the specified transformer. Currently supports Imputer and HashOneHotEncoder.\n",
|
"2. Transformer parameter update: Update parameters for the specified transformer. Currently supports Imputer and HashOneHotEncoder.\n",
|
||||||
"3. Drop columns: Columns to drop from being featurized.\n",
|
"3. Drop columns: Columns to drop from being featurized.\n",
|
||||||
@@ -447,7 +444,6 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Explanations\n",
|
"## Explanations\n",
|
||||||
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade). \n",
|
|
||||||
"This section will walk you through the workflow to compute model explanations for an AutoML model on your remote compute.\n",
|
"This section will walk you through the workflow to compute model explanations for an AutoML model on your remote compute.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### Retrieve any AutoML Model for explanations\n",
|
"### Retrieve any AutoML Model for explanations\n",
|
||||||
@@ -655,7 +651,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Operationailze\n",
|
"## Operationalize\n",
|
||||||
"In this section we will show how you can operationalize an AutoML model and the explainer which was used to compute the explanations in the previous section.\n",
|
"In this section we will show how you can operationalize an AutoML model and the explainer which was used to compute the explanations in the previous section.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### Register the AutoML model and the scoring explainer\n",
|
"### Register the AutoML model and the scoring explainer\n",
|
||||||
@@ -905,7 +901,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"authors": [
|
"authors": [
|
||||||
{
|
{
|
||||||
"name": "anumamah"
|
"name": "anshirga"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"categories": [
|
"categories": [
|
||||||
|
|||||||
@@ -92,7 +92,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
|
"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
|
||||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -462,7 +462,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"authors": [
|
"authors": [
|
||||||
{
|
{
|
||||||
"name": "rakellam"
|
"name": "ratanase"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"categories": [
|
"categories": [
|
||||||
|
|||||||
@@ -232,7 +232,7 @@
|
|||||||
" max_nodes=4)\n",
|
" max_nodes=4)\n",
|
||||||
"\n",
|
"\n",
|
||||||
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
|
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
|
||||||
" compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n",
|
"compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"print(\"Azure Machine Learning Compute attached\")\n",
|
"print(\"Azure Machine Learning Compute attached\")\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -249,7 +249,7 @@
|
|||||||
" max_nodes=4)\n",
|
" max_nodes=4)\n",
|
||||||
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
|
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
|
||||||
" \n",
|
" \n",
|
||||||
" cpu_cluster.wait_for_completion(show_output=True)"
|
"cpu_cluster.wait_for_completion(show_output=True)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -460,8 +460,8 @@
|
|||||||
" name=\"Merge Taxi Data\",\n",
|
" name=\"Merge Taxi Data\",\n",
|
||||||
" script_name=\"merge.py\", \n",
|
" script_name=\"merge.py\", \n",
|
||||||
" arguments=[\"--output_merge\", merged_data],\n",
|
" arguments=[\"--output_merge\", merged_data],\n",
|
||||||
" inputs=[cleansed_green_data.parse_parquet_files(file_extension=None),\n",
|
" inputs=[cleansed_green_data.parse_parquet_files(),\n",
|
||||||
" cleansed_yellow_data.parse_parquet_files(file_extension=None)],\n",
|
" cleansed_yellow_data.parse_parquet_files()],\n",
|
||||||
" outputs=[merged_data],\n",
|
" outputs=[merged_data],\n",
|
||||||
" compute_target=aml_compute,\n",
|
" compute_target=aml_compute,\n",
|
||||||
" runconfig=aml_run_config,\n",
|
" runconfig=aml_run_config,\n",
|
||||||
@@ -497,7 +497,7 @@
|
|||||||
" name=\"Filter Taxi Data\",\n",
|
" name=\"Filter Taxi Data\",\n",
|
||||||
" script_name=\"filter.py\", \n",
|
" script_name=\"filter.py\", \n",
|
||||||
" arguments=[\"--output_filter\", filtered_data],\n",
|
" arguments=[\"--output_filter\", filtered_data],\n",
|
||||||
" inputs=[merged_data.parse_parquet_files(file_extension=None)],\n",
|
" inputs=[merged_data.parse_parquet_files()],\n",
|
||||||
" outputs=[filtered_data],\n",
|
" outputs=[filtered_data],\n",
|
||||||
" compute_target=aml_compute,\n",
|
" compute_target=aml_compute,\n",
|
||||||
" runconfig = aml_run_config,\n",
|
" runconfig = aml_run_config,\n",
|
||||||
@@ -533,7 +533,7 @@
|
|||||||
" name=\"Normalize Taxi Data\",\n",
|
" name=\"Normalize Taxi Data\",\n",
|
||||||
" script_name=\"normalize.py\", \n",
|
" script_name=\"normalize.py\", \n",
|
||||||
" arguments=[\"--output_normalize\", normalized_data],\n",
|
" arguments=[\"--output_normalize\", normalized_data],\n",
|
||||||
" inputs=[filtered_data.parse_parquet_files(file_extension=None)],\n",
|
" inputs=[filtered_data.parse_parquet_files()],\n",
|
||||||
" outputs=[normalized_data],\n",
|
" outputs=[normalized_data],\n",
|
||||||
" compute_target=aml_compute,\n",
|
" compute_target=aml_compute,\n",
|
||||||
" runconfig = aml_run_config,\n",
|
" runconfig = aml_run_config,\n",
|
||||||
@@ -574,7 +574,7 @@
|
|||||||
" name=\"Transform Taxi Data\",\n",
|
" name=\"Transform Taxi Data\",\n",
|
||||||
" script_name=\"transform.py\", \n",
|
" script_name=\"transform.py\", \n",
|
||||||
" arguments=[\"--output_transform\", transformed_data],\n",
|
" arguments=[\"--output_transform\", transformed_data],\n",
|
||||||
" inputs=[normalized_data.parse_parquet_files(file_extension=None)],\n",
|
" inputs=[normalized_data.parse_parquet_files()],\n",
|
||||||
" outputs=[transformed_data],\n",
|
" outputs=[transformed_data],\n",
|
||||||
" compute_target=aml_compute,\n",
|
" compute_target=aml_compute,\n",
|
||||||
" runconfig = aml_run_config,\n",
|
" runconfig = aml_run_config,\n",
|
||||||
@@ -614,7 +614,7 @@
|
|||||||
" script_name=\"train_test_split.py\", \n",
|
" script_name=\"train_test_split.py\", \n",
|
||||||
" arguments=[\"--output_split_train\", output_split_train,\n",
|
" arguments=[\"--output_split_train\", output_split_train,\n",
|
||||||
" \"--output_split_test\", output_split_test],\n",
|
" \"--output_split_test\", output_split_test],\n",
|
||||||
" inputs=[transformed_data.parse_parquet_files(file_extension=None)],\n",
|
" inputs=[transformed_data.parse_parquet_files()],\n",
|
||||||
" outputs=[output_split_train, output_split_test],\n",
|
" outputs=[output_split_train, output_split_test],\n",
|
||||||
" compute_target=aml_compute,\n",
|
" compute_target=aml_compute,\n",
|
||||||
" runconfig = aml_run_config,\n",
|
" runconfig = aml_run_config,\n",
|
||||||
@@ -690,7 +690,7 @@
|
|||||||
" \"n_cross_validations\": 5\n",
|
" \"n_cross_validations\": 5\n",
|
||||||
"}\n",
|
"}\n",
|
||||||
"\n",
|
"\n",
|
||||||
"training_dataset = output_split_train.parse_parquet_files(file_extension=None).keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])\n",
|
"training_dataset = output_split_train.parse_parquet_files().keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])\n",
|
||||||
"\n",
|
"\n",
|
||||||
"automl_config = AutoMLConfig(task = 'regression',\n",
|
"automl_config = AutoMLConfig(task = 'regression',\n",
|
||||||
" debug_log = 'automated_ml_errors.log',\n",
|
" debug_log = 'automated_ml_errors.log',\n",
|
||||||
|
|||||||
@@ -180,7 +180,9 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Create a FileDataset\n",
|
"### Create a FileDataset\n",
|
||||||
"A [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred."
|
"A [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred.",
|
||||||
|
"\n",
|
||||||
|
"You can use dataset objects as inputs. Register the datasets to the workspace if you want to reuse them later."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -160,7 +160,8 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Create a TabularDataset\n",
|
"### Create a TabularDataset\n",
|
||||||
"A [TabularDataSet](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) references single or multiple files which contain data in a tabular structure (ie like CSV files) in your datastores or public urls. TabularDatasets provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred."
|
"A [TabularDataSet](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) references single or multiple files which contain data in a tabular structure (ie like CSV files) in your datastores or public urls. TabularDatasets provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred.\n",
|
||||||
|
"You can use dataset objects as inputs. Register the datasets to the workspace if you want to reuse them later."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -175,8 +176,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"path_on_datastore = iris_data.path('iris/')\n",
|
"path_on_datastore = iris_data.path('iris/')\n",
|
||||||
"input_iris_ds = Dataset.Tabular.from_delimited_files(path=path_on_datastore, validate=False)\n",
|
"input_iris_ds = Dataset.Tabular.from_delimited_files(path=path_on_datastore, validate=False)\n",
|
||||||
"registered_iris_ds = input_iris_ds.register(ws, iris_ds_name, create_new_version=True)\n",
|
"named_iris_ds = input_iris_ds.as_named_input(iris_ds_name)"
|
||||||
"named_iris_ds = registered_iris_ds.as_named_input(iris_ds_name)"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -136,7 +136,7 @@
|
|||||||
" # create the cluster\n",
|
" # create the cluster\n",
|
||||||
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
|
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
|
||||||
"\n",
|
"\n",
|
||||||
" compute_target.wait_for_completion(show_output=True)\n",
|
"compute_target.wait_for_completion(show_output=True)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# use get_status() to get a detailed status for the current cluster. \n",
|
"# use get_status() to get a detailed status for the current cluster. \n",
|
||||||
"print(compute_target.get_status().serialize())"
|
"print(compute_target.get_status().serialize())"
|
||||||
@@ -606,14 +606,32 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** `print(service.get_logs())`"
|
"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** "
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"print(service.get_logs())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"This is the scoring web service endpoint: `print(service.scoring_uri)`"
|
"This is the scoring web service endpoint:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"print(service.scoring_uri)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -742,7 +760,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"authors": [
|
"authors": [
|
||||||
{
|
{
|
||||||
"name": "swatig"
|
"name": "nagaur"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
"category": "training",
|
"category": "training",
|
||||||
|
|||||||
@@ -308,9 +308,9 @@
|
|||||||
" # create the cluster\n",
|
" # create the cluster\n",
|
||||||
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
|
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
|
||||||
"\n",
|
"\n",
|
||||||
" # can poll for a minimum number of nodes and for a specific timeout. \n",
|
"# can poll for a minimum number of nodes and for a specific timeout. \n",
|
||||||
" # if no min node count is provided it uses the scale settings for the cluster\n",
|
"# if no min node count is provided it uses the scale settings for the cluster\n",
|
||||||
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
|
+"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"

@@ -429,7 +429,8 @@
 "dependencies:\n",
 "- python=3.6.2\n",
 "- pip:\n",
-" - azureml-defaults==1.13.0\n",
+" - h5py<=2.10.0\n",
+" - azureml-defaults\n",
 " - tensorflow-gpu==2.0.0\n",
 " - keras<=2.3.1\n",
 " - matplotlib"

@@ -981,6 +982,7 @@
 "\n",
 "cd = CondaDependencies.create()\n",
 "cd.add_tensorflow_conda_package()\n",
+"cd.add_conda_package('h5py<=2.10.0')\n",
 "cd.add_conda_package('keras<=2.3.1')\n",
 "cd.add_pip_package(\"azureml-defaults\")\n",
 "cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')\n",

@@ -1031,7 +1033,16 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** `print(service.get_logs())`"
+"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:**"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"print(service.get_logs())"
 ]
 },
 {
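Read together, the two dependency hunks above produce an environment spec along these lines (a reconstruction from the diff, not a file shown in this compare view; the notebook saves it as `myenv.yml`). The new `h5py<=2.10.0` pin is the common workaround for h5py 3.x breaking HDF5 model saving with Keras 2.3.x / TensorFlow 2.0:

```yaml
# Reconstructed from the hunks above; h5py is pinned at or below 2.10.0
# because h5py 3.x is incompatible with Keras 2.3.x HDF5 model saving.
dependencies:
- python=3.6.2
- pip:
  - h5py<=2.10.0
  - azureml-defaults
  - tensorflow-gpu==2.0.0
  - keras<=2.3.1
  - matplotlib
```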
@@ -128,7 +128,7 @@
 " # create the cluster\n",
 " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-" compute_target.wait_for_completion(show_output=True)\n",
+"compute_target.wait_for_completion(show_output=True)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"

@@ -714,7 +714,7 @@
 "metadata": {
 "authors": [
 {
-"name": "swatig"
+"name": "nagaur"
 }
 ],
 "category": "training",
@@ -153,9 +153,9 @@
 " # create the cluster\n",
 " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-" # can poll for a minimum number of nodes and for a specific timeout. \n",
-" # if no min node count is provided it uses the scale settings for the cluster\n",
-" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
+"# can poll for a minimum number of nodes and for a specific timeout. \n",
+"# if no min node count is provided it uses the scale settings for the cluster\n",
+"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"

@@ -572,7 +572,7 @@
 "metadata": {
 "authors": [
 {
-"name": "swatig"
+"name": "nagaur"
 }
 ],
 "category": "training",
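The hunks above all dedent `wait_for_completion` out of the cluster-creation branch, so that the notebook waits for the cluster whether it was found or freshly created. A minimal control-flow sketch of that fix, using stand-in classes rather than the real `azureml.core` objects (which need a live workspace):

```python
# Sketch of the pattern the hunks fix: wait_for_completion was indented under
# the "create" path and never ran when the cluster already existed. Dedenting
# it (the new side of the diff) makes both paths wait. FakeCluster and
# ComputeTargetException are stand-ins for the azureml.core equivalents.

class ComputeTargetException(Exception):
    pass

class FakeCluster:
    def __init__(self, name):
        self.name = name
        self.waited = False

    def wait_for_completion(self, show_output=True, min_node_count=None,
                            timeout_in_minutes=20):
        self.waited = True

def get_or_create(existing, cluster_name):
    try:
        if cluster_name not in existing:
            raise ComputeTargetException(cluster_name)
        compute_target = existing[cluster_name]
    except ComputeTargetException:
        # create the cluster
        compute_target = existing.setdefault(cluster_name, FakeCluster(cluster_name))
    # dedented: runs for both the "found" and the "created" path
    compute_target.wait_for_completion(show_output=True, min_node_count=None,
                                       timeout_in_minutes=20)
    return compute_target
```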
@@ -306,9 +306,9 @@
 " # create the cluster\n",
 " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-" # can poll for a minimum number of nodes and for a specific timeout. \n",
-" # if no min node count is provided it uses the scale settings for the cluster\n",
-" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
+"# can poll for a minimum number of nodes and for a specific timeout. \n",
+"# if no min node count is provided it uses the scale settings for the cluster\n",
+"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"

@@ -852,7 +852,7 @@
 "metadata": {
 "authors": [
 {
-"name": "swatig"
+"name": "nagaur"
 }
 ],
 "category": "training",
@@ -322,9 +322,9 @@
 " # create the cluster\n",
 " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
 "\n",
-" # can poll for a minimum number of nodes and for a specific timeout. \n",
-" # if no min node count is provided it uses the scale settings for the cluster\n",
-" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
+"# can poll for a minimum number of nodes and for a specific timeout. \n",
+"# if no min node count is provided it uses the scale settings for the cluster\n",
+"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
 "\n",
 "# use get_status() to get a detailed status for the current cluster. \n",
 "print(compute_target.get_status().serialize())"

@@ -1135,7 +1135,7 @@
 "metadata": {
 "authors": [
 {
-"name": "swatig"
+"name": "nagaur"
 }
 ],
 "category": "training",
@@ -30,7 +30,6 @@ Using these samples, you will learn how to do the following.
 
 | File/folder | Description |
 |-------------------|--------------------------------------------|
-| [devenv_setup.ipynb](setup/devenv_setup.ipynb) | Notebook to setup virtual network for using Azure Machine Learning. Needed for the Pong and Minecraft examples. |
 | [cartpole_ci.ipynb](cartpole-on-compute-instance/cartpole_ci.ipynb) | Notebook to train a Cartpole playing agent on an Azure Machine Learning Compute Instance |
 | [cartpole_sc.ipynb](cartpole-on-single-compute/cartpole_sc.ipynb) | Notebook to train a Cartpole playing agent on an Azure Machine Learning Compute Cluster (single node) |
 | [pong_rllib.ipynb](atari-on-distributed-compute/pong_rllib.ipynb) | Notebook for distributed training of Pong agent using RLlib on multiple compute targets |

@@ -46,9 +45,7 @@ To make use of these samples, you need the following.
 * An Azure Machine Learning Workspace in the resource group.
 * Azure Machine Learning training compute. These samples use the VM sizes `STANDARD_NC6` and `STANDARD_D2_V2`. If these are not available in your region,
 you can replace them with other sizes.
-* A virtual network set up in the resource group for samples that use multiple compute targets. The Cartpole examples do not need a virtual network.
-* The [devenv_setup.ipynb](setup/devenv_setup.ipynb) notebook shows you how to create a virtual network. You can alternatively use an existing virtual network, make sure it's in the same region as workspace is.
-* Any network security group defined on the virtual network must allow network traffic on ports used by Azure infrastructure services. This is described in more detail in the [devenv_setup.ipynb](setup/devenv_setup.ipynb) notebook.
+* A virtual network set up in the resource group for samples that use multiple compute targets. The Cartpole and Multi-agent Particle examples do not need a virtual network. Any network security group defined on the virtual network must allow network traffic on ports used by Azure infrastructure services. Sample instructions are provided in Atari Pong and Minecraft example notebooks.
 
 
 ## Setup
@@ -57,7 +57,7 @@
 "source": [
 "## Prerequisite\n",
 "\n",
-"The user should have completed the [Reinforcement Learning in Azure Machine Learning - Setting Up Development Environment](../setup/devenv_setup.ipynb) to setup a virtual network. This virtual network will be used here for head and worker compute targets. It is highly recommended that the user should go through the [Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb) to understand the basics of Reinforcement Learning in Azure Machine Learning and Ray RLlib used in this notebook."
+"It is highly recommended that the user should go through the [Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb) to understand the basics of Reinforcement Learning in Azure Machine Learning and Ray RLlib used in this notebook."
 ]
 },
 {

@@ -69,6 +69,7 @@
 "\n",
 "* Connecting to a workspace to enable communication between your local machine and remote resources\n",
 "* Creating an experiment to track all your runs\n",
+"* Setting up a virtual network\n",
 "* Creating remote head and worker compute target on a virtual network to use for training"
 ]
 },

@@ -140,9 +141,13 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Specify the name of your virtual network\n",
+"### Create Virtual Network\n",
 "\n",
-"The resource group you use must contain a virtual network. Specify the name of the virtual network here created in the [Azure Machine Learning Reinforcement Learning Sample - Setting Up Development Environment](../setup/devenv_setup.ipynb)."
+"If you are using separate compute targets for the Ray head and worker, a virtual network must be created in the resource group. If you have alraeady created a virtual network in the resource group, you can skip this step.\n",
+"\n",
+"To do this, you first must install the Azure Networking API.\n",
+"\n",
+"`pip install --upgrade azure-mgmt-network`"
 ]
 },
 {

@@ -151,15 +156,132 @@
 "metadata": {},
 "outputs": [],
 "source": [
+"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
+"#!pip install --upgrade azure-mgmt-network"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azure.mgmt.network import NetworkManagementClient\n",
+"\n",
 "# Virtual network name\n",
-"vnet_name = 'your_vnet'"
+"vnet_name =\"your_vnet\"\n",
+"\n",
+"# Default subnet\n",
+"subnet_name =\"default\"\n",
+"\n",
+"# The Azure subscription you are using\n",
+"subscription_id=ws.subscription_id\n",
+"\n",
+"# The resource group for the reinforcement learning cluster\n",
+"resource_group=ws.resource_group\n",
+"\n",
+"# Azure region of the resource group\n",
+"location=ws.location\n",
+"\n",
+"network_client = NetworkManagementClient(ws._auth_object, subscription_id)\n",
+"\n",
+"async_vnet_creation = network_client.virtual_networks.create_or_update(\n",
+" resource_group,\n",
+" vnet_name,\n",
+" {\n",
+" 'location': location,\n",
+" 'address_space': {\n",
+" 'address_prefixes': ['10.0.0.0/16']\n",
+" }\n",
+" }\n",
+")\n",
+"\n",
+"async_vnet_creation.wait()\n",
+"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
+"### Set up Network Security Group on Virtual Network\n",
+"\n",
+"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).\n",
+"\n",
+"A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).\n",
+"\n",
+"You may need to modify the code below to match your scenario."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"import azure.mgmt.network.models\n",
+"\n",
+"security_group_name = vnet_name + '-' + \"nsg\"\n",
+"security_rule_name = \"AllowAML\"\n",
+"\n",
+"# Create a network security group\n",
+"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
+" location=location,\n",
+" security_rules=[\n",
+" azure.mgmt.network.models.SecurityRule(\n",
+" name=security_rule_name,\n",
+" access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
+" description='Reinforcement Learning in Azure Machine Learning rule',\n",
+" destination_address_prefix='*',\n",
+" destination_port_range='29876-29877',\n",
+" direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
+" priority=400,\n",
+" protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
+" source_address_prefix='BatchNodeManagement',\n",
+" source_port_range='*'\n",
+" ),\n",
+" ],\n",
+")\n",
+"\n",
+"async_nsg_creation = network_client.network_security_groups.create_or_update(\n",
+" resource_group,\n",
+" security_group_name,\n",
+" nsg_params,\n",
+")\n",
+"\n",
+"async_nsg_creation.wait() \n",
+"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
+"\n",
+"network_security_group = network_client.network_security_groups.get(\n",
+" resource_group,\n",
+" security_group_name,\n",
+")\n",
+"\n",
+"# Define a subnet to be created with network security group\n",
+"subnet = azure.mgmt.network.models.Subnet(\n",
+" id='default',\n",
+" address_prefix='10.0.0.0/24',\n",
+" network_security_group=network_security_group\n",
+" )\n",
+" \n",
+"# Create subnet on virtual network\n",
+"async_subnet_creation = network_client.subnets.create_or_update(\n",
+" resource_group_name=resource_group,\n",
+" virtual_network_name=vnet_name,\n",
+" subnet_name=subnet_name,\n",
+" subnet_parameters=subnet\n",
+")\n",
+"\n",
+"async_subnet_creation.wait()\n",
+"print(\"Subnet created successfully:\", async_subnet_creation.result())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Review the virtual network security rules\n",
+"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
 ]
 },
 {
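The cells added above open inbound TCP ports 29876-29877 from the `BatchNodeManagement` service tag so Azure Batch can manage the compute nodes. A plain-dict sketch of that rule's shape, for reference only; the field names here are illustrative strings, while the real cells use the `azure.mgmt.network.models.SecurityRule` model class and its enums:

```python
# Hypothetical helper mirroring the security rule the diff adds: an inbound
# allow rule for the Azure Batch node-management ports on the vnet's NSG.
def aml_security_rule(name="AllowAML", priority=400):
    return {
        "name": name,
        "access": "Allow",
        "direction": "Inbound",
        "protocol": "Tcp",
        # Azure Batch management traffic comes from this service tag
        "source_address_prefix": "BatchNodeManagement",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        # ports Azure needs open to manage the compute nodes
        "destination_port_range": "29876-29877",
        "priority": priority,
    }
```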
@@ -152,6 +152,9 @@
 "from azureml.core.compute import ComputeInstance\n",
 "from azureml.core.compute_target import ComputeTargetException\n",
 "\n",
+"import random\n",
+"import string\n",
+"\n",
 "# Load current compute instance info\n",
 "current_compute_instance = load_nbvm()\n",
 "\n",

@@ -160,7 +163,8 @@
 " print(\"Current compute instance:\", current_compute_instance)\n",
 " instance_name = current_compute_instance['instance']\n",
 "else:\n",
-" instance_name = \"cartpole-ci-stdd2v2\"\n",
+" # Compute instance name needs to be unique across all existing compute instances within an Azure region\n",
+" instance_name = \"cartpole-ci-\" + \"\".join(random.choice(string.ascii_lowercase) for _ in range(5))\n",
 " try:\n",
 " instance = ComputeInstance(workspace=ws, name=instance_name)\n",
 " print('Found existing instance, use it.')\n",

@@ -176,7 +180,7 @@
 "compute_target = ws.compute_targets[instance_name]\n",
 "\n",
 "print(\"Compute target status:\")\n",
-"print(compute_target.get_status().serialize())\n"
+"print(compute_target.get_status().serialize())"
 ]
 },
 {
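The hunk above replaces the hard-coded instance name with a randomized one, since compute instance names must be unique within an Azure region. The naming scheme can be isolated as a small helper (a sketch; the notebook inlines this expression rather than defining a function):

```python
import random
import string

def unique_instance_name(prefix="cartpole-ci-", n=5):
    # Compute instance names must be unique across all existing compute
    # instances within an Azure region, so append a short random lowercase
    # suffix instead of a fixed name like "cartpole-ci-stdd2v2".
    return prefix + "".join(random.choice(string.ascii_lowercase) for _ in range(n))
```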
@@ -77,11 +77,6 @@
 "workspace. For detailed instructions see [Tutorial: Get started creating\n",
 "your first ML experiment.](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup)\n",
 "\n",
-"In addition, please follow the instructions in the [Reinforcement Learning in\n",
-"Azure Machine Learning - Setting Up Development Environment](../setup/devenv_setup.ipynb)\n",
-"notebook to correctly set up a Virtual Network which is required for completing \n",
-"this tutorial.\n",
-"\n",
 "While this is a standalone notebook, we highly recommend going over the\n",
 "introductory notebooks for RL first.\n",
 "- Getting started:\n",

@@ -96,6 +91,7 @@
 "This includes:\n",
 "- Connecting to your existing Azure Machine Learning workspace.\n",
 "- Creating an experiment to track runs.\n",
+"- Setting up a virtual network\n",
 "- Creating remote compute targets for [Ray](https://docs.ray.io/en/latest/index.html).\n",
 "\n",
 "### Azure Machine Learning SDK\n",

@@ -161,6 +157,164 @@
 "exp = Experiment(workspace=ws, name='minecraft-maze')"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Create Virtual Network\n",
+"\n",
+"If you are using separate compute targets for the Ray head and worker, a virtual network must be created in the resource group. If you have alraeady created a virtual network in the resource group, you can skip this step.\n",
+"\n",
+"To do this, you first must install the Azure Networking API.\n",
+"\n",
+"`pip install --upgrade azure-mgmt-network`"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
+"#!pip install --upgrade azure-mgmt-network"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azure.mgmt.network import NetworkManagementClient\n",
+"\n",
+"# Virtual network name\n",
+"vnet_name =\"your_vnet\"\n",
+"\n",
+"# Default subnet\n",
+"subnet_name =\"default\"\n",
+"\n",
+"# The Azure subscription you are using\n",
+"subscription_id=ws.subscription_id\n",
+"\n",
+"# The resource group for the reinforcement learning cluster\n",
+"resource_group=ws.resource_group\n",
+"\n",
+"# Azure region of the resource group\n",
+"location=ws.location\n",
+"\n",
+"network_client = NetworkManagementClient(ws._auth_object, subscription_id)\n",
+"\n",
+"async_vnet_creation = network_client.virtual_networks.create_or_update(\n",
+" resource_group,\n",
+" vnet_name,\n",
+" {\n",
+" 'location': location,\n",
+" 'address_space': {\n",
+" 'address_prefixes': ['10.0.0.0/16']\n",
+" }\n",
+" }\n",
+")\n",
+"\n",
+"async_vnet_creation.wait()\n",
+"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Set up Network Security Group on Virtual Network\n",
+"\n",
+"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).\n",
+"\n",
+"A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).\n",
+"\n",
+"You may need to modify the code below to match your scenario."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"import azure.mgmt.network.models\n",
+"\n",
+"security_group_name = vnet_name + '-' + \"nsg\"\n",
+"security_rule_name = \"AllowAML\"\n",
+"\n",
+"# Create a network security group\n",
+"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
+" location=location,\n",
+" security_rules=[\n",
+" azure.mgmt.network.models.SecurityRule(\n",
+" name=security_rule_name,\n",
+" access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
+" description='Reinforcement Learning in Azure Machine Learning rule',\n",
+" destination_address_prefix='*',\n",
+" destination_port_range='29876-29877',\n",
+" direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
+" priority=400,\n",
+" protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
+" source_address_prefix='BatchNodeManagement',\n",
+" source_port_range='*'\n",
+" ),\n",
+" ],\n",
+")\n",
+"\n",
+"async_nsg_creation = network_client.network_security_groups.create_or_update(\n",
+" resource_group,\n",
+" security_group_name,\n",
+" nsg_params,\n",
+")\n",
+"\n",
+"async_nsg_creation.wait() \n",
+"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
+"\n",
+"network_security_group = network_client.network_security_groups.get(\n",
+" resource_group,\n",
+" security_group_name,\n",
+")\n",
+"\n",
+"# Define a subnet to be created with network security group\n",
+"subnet = azure.mgmt.network.models.Subnet(\n",
+" id='default',\n",
+" address_prefix='10.0.0.0/24',\n",
+" network_security_group=network_security_group\n",
+" )\n",
+" \n",
+"# Create subnet on virtual network\n",
+"async_subnet_creation = network_client.subnets.create_or_update(\n",
+" resource_group_name=resource_group,\n",
+" virtual_network_name=vnet_name,\n",
+" subnet_name=subnet_name,\n",
+" subnet_parameters=subnet\n",
+")\n",
+"\n",
+"async_subnet_creation.wait()\n",
+"print(\"Subnet created successfully:\", async_subnet_creation.result())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Review the virtual network security rules\n",
+"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from files.networkutils import *\n",
+"\n",
+"check_vnet_security_rules(ws._auth_object, ws.subscription_id, ws.resource_group, vnet_name, True)"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},

@@ -203,12 +357,6 @@
 "from azureml.core.compute import ComputeTarget, AmlCompute\n",
 "from azureml.core.compute_target import ComputeTargetException\n",
 "\n",
-"# please enter the name of your Virtual Network (see Prerequisites -> Workspace setup)\n",
-"vnet_name = 'your_vnet'\n",
-"\n",
-"# name of the Virtual Network subnet ('default' the default name)\n",
-"subnet_name = 'default'\n",
-"\n",
 "gpu_cluster_name = 'gpu-cl-nc6-vnet'\n",
 "\n",
 "try:\n",
@@ -1,262 +0,0 @@
|
|||||||
{
|
|
||||||
"cells": [
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
|
||||||
"\n",
|
|
||||||
"Licensed under the MIT License."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
""
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"# Reinforcement Learning in Azure Machine Learning - Setting Up Development Environment\n",
|
|
||||||
"\n",
|
|
||||||
"Ray multi-node cluster setup requires all worker nodes to be able to communicate with the head node. This notebook explains you how to setup a virtual network, to be used by the Ray head and worker compute targets, created and used in other notebook examples."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Prerequisite\n",
|
|
||||||
"\n",
|
|
||||||
"The user should have completed the Azure Machine Learning Tutorial: [Get started creating your first ML experiment with the Python SDK](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup). You will need to make sure that you have a valid subscription ID, a resource group, and an Azure Machine Learning workspace."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Azure Machine Learning SDK \n",
|
|
||||||
"Display the Azure Machine Learning SDK version."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"import azureml.core\n",
|
|
||||||
"\n",
|
|
||||||
"print(\"Azure Machine Learning SDK Version: \", azureml.core.VERSION)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Get Azure Machine Learning workspace\n",
|
|
||||||
"Get a reference to an existing Azure Machine Learning workspace.\n"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core import Workspace\n",
|
|
||||||
"\n",
|
|
||||||
"ws = Workspace.from_config()\n",
|
|
||||||
"print(ws.name, ws.location, ws.resource_group, sep = ' | ')"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Create Virtual Network\n",
"\n",
"If you are using separate compute targets for the Ray head and worker nodes, a virtual network must be created in the resource group. If you have already created a virtual network in the resource group, you can skip this step.\n",
"\n",
"To do this, you first must install the Azure Networking SDK:\n",
"\n",
"`pip install --upgrade azure-mgmt-network`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
"#!pip install --upgrade azure-mgmt-network"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azure.mgmt.network import NetworkManagementClient\n",
"\n",
"# Virtual network name\n",
"vnet_name = \"your_vnet\"\n",
"\n",
"# Default subnet\n",
"subnet_name = \"default\"\n",
"\n",
"# The Azure subscription you are using\n",
"subscription_id = ws.subscription_id\n",
"\n",
"# The resource group for the reinforcement learning cluster\n",
"resource_group = ws.resource_group\n",
"\n",
"# Azure region of the resource group\n",
"location = ws.location\n",
"\n",
"network_client = NetworkManagementClient(ws._auth_object, subscription_id)\n",
"\n",
"async_vnet_creation = network_client.virtual_networks.create_or_update(\n",
"    resource_group,\n",
"    vnet_name,\n",
"    {\n",
"        'location': location,\n",
"        'address_space': {\n",
"            'address_prefixes': ['10.0.0.0/16']\n",
"        }\n",
"    }\n",
")\n",
"\n",
"async_vnet_creation.wait()\n",
"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up Network Security Group on Virtual Network\n",
"\n",
"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).\n",
"\n",
"A common situation is that ports `29876-29877` are closed. The following code adds a security rule to open these ports. Alternatively, you can do this manually in the [Azure portal](https://portal.azure.com).\n",
"\n",
"You may need to modify the code below to match your scenario."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azure.mgmt.network.models\n",
"\n",
"security_group_name = vnet_name + '-' + \"nsg\"\n",
"security_rule_name = \"AllowAML\"\n",
"\n",
"# Create a network security group\n",
"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
"    location=location,\n",
"    security_rules=[\n",
"        azure.mgmt.network.models.SecurityRule(\n",
"            name=security_rule_name,\n",
"            access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
"            description='Reinforcement Learning in Azure Machine Learning rule',\n",
"            destination_address_prefix='*',\n",
"            destination_port_range='29876-29877',\n",
"            direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
"            priority=400,\n",
"            protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
"            source_address_prefix='BatchNodeManagement',\n",
"            source_port_range='*'\n",
"        ),\n",
"    ],\n",
")\n",
"\n",
"async_nsg_creation = network_client.network_security_groups.create_or_update(\n",
"    resource_group,\n",
"    security_group_name,\n",
"    nsg_params,\n",
")\n",
"\n",
"async_nsg_creation.wait()\n",
"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
"\n",
"network_security_group = network_client.network_security_groups.get(\n",
"    resource_group,\n",
"    security_group_name,\n",
")\n",
"\n",
"# Define a subnet to be created with network security group\n",
"subnet = azure.mgmt.network.models.Subnet(\n",
"    id='default',\n",
"    address_prefix='10.0.0.0/24',\n",
"    network_security_group=network_security_group\n",
")\n",
"\n",
"# Create subnet on virtual network\n",
"async_subnet_creation = network_client.subnets.create_or_update(\n",
"    resource_group_name=resource_group,\n",
"    virtual_network_name=vnet_name,\n",
"    subnet_name=subnet_name,\n",
"    subnet_parameters=subnet\n",
")\n",
"\n",
"async_subnet_creation.wait()\n",
"print(\"Subnet created successfully:\", async_subnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review the virtual network security rules\n",
"Ensure that the virtual network is configured correctly with the required ports open. You may have existing rules with a broader port range that already cover ports 29876-29877, so review your network security group rules."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from files.networkutils import *\n",
"\n",
"check_vnet_security_rules(ws._auth_object, ws.subscription_id, ws.resource_group, vnet_name, True)"
]
}
],
"metadata": {
"authors": [
{
"name": "vineetg"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"notice": "Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License."
},
"nbformat": 4,
"nbformat_minor": 4
}
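The notebook's final cell delegates the port verification to a repository helper, `check_vnet_security_rules`. As a rough local illustration of what such a check involves, the sketch below scans a list of NSG rules for inbound allow rules that cover ports 29876-29877. The rules are plain dictionaries standing in for `azure-mgmt-network` model objects, and the function names are invented for this example; unlike a real NSG evaluation, it also ignores rule priorities, so an overlapping higher-priority deny rule would not be detected.

```python
# Hypothetical, simplified sketch of an NSG port check. Rules are plain
# dicts, not live azure-mgmt-network objects, and priorities are ignored.

REQUIRED_PORTS = range(29876, 29878)  # 29876-29877, used by Azure Batch node management


def parse_port_range(port_range):
    """Expand an NSG port-range string ('*', '22', or '29876-29877') into a set of ports."""
    if port_range == "*":
        return set(range(0, 65536))
    if "-" in port_range:
        low, high = port_range.split("-")
        return set(range(int(low), int(high) + 1))
    return {int(port_range)}


def ports_open(security_rules, required=REQUIRED_PORTS):
    """Return True if inbound 'Allow' rules collectively cover every required port."""
    allowed = set()
    for rule in security_rules:
        if rule["direction"] == "Inbound" and rule["access"] == "Allow":
            allowed |= parse_port_range(rule["destination_port_range"])
    return all(port in allowed for port in required)


rules = [
    {"direction": "Inbound", "access": "Allow", "destination_port_range": "29876-29877"},
    {"direction": "Inbound", "access": "Deny", "destination_port_range": "*"},
]
print(ports_open(rules))  # -> True
```

A real check would also have to resolve source address prefixes (e.g. the `BatchNodeManagement` service tag) and evaluate rules in priority order, which is why the notebook relies on the maintained helper instead.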
@@ -1,4 +0,0 @@
-name: devenv_setup
-dependencies:
-- pip:
-  - azureml-sdk
@@ -100,7 +100,7 @@
 "\n",
 "# Check core SDK version number\n",
 "\n",
-"print(\"This notebook was created using SDK version 1.17.0, you are currently running version\", azureml.core.VERSION)"
+"print(\"This notebook was created using SDK version 1.18.0, you are currently running version\", azureml.core.VERSION)"
 ]
 },
 {
@@ -37,7 +37,6 @@
 "1. [Other ways to create environments](#Other-ways-to-create-environments)\n",
 " 1. From existing Conda environment\n",
 " 1. From Conda or pip files\n",
-"1. [Estimators and environments](#Estimators-and-environments) \n",
 "1. [Using environments for inferencing](#Using-environments-for-inferencing)\n",
 "1. [Docker settings](#Docker-settings)\n",
 "1. [Spark and Azure Databricks settings](#Spark-and-Azure-Databricks-settings)\n",
@@ -424,11 +423,9 @@
 "source": [
 "## Next steps\n",
 "\n",
-"Learn more about remote runs on different compute targets:\n",
+"Train with ML frameworks on Azure ML:\n",
 "\n",
-"* [Train on ML Compute](../../training/train-on-amlcompute/train-on-amlcompute.ipynb)\n",
+"* [Train with ML frameworks](../../ml-frameworks)\n",
-"\n",
-"* [Train on remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb)\n",
 "\n",
 "Learn more about registering and deploying a model:\n",
 "\n",
index.md
@@ -129,7 +129,6 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
 | [cartpole_sc](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.ipynb) | | | | | | |
 | [minecraft](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.ipynb) | | | | | | |
 | [particle](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/multiagent-particle-envs/particle.ipynb) | | | | | | |
-| [devenv_setup](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/setup/devenv_setup.ipynb) | | | | | | |
 | [Logging APIs](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb) | Logging APIs and analyzing results | None | None | None | None | None |
 | [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master//setup-environment/configuration.ipynb) | | | | | | |
 | [tutorial-1st-experiment-sdk-train](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/create-first-ml-experiment/tutorial-1st-experiment-sdk-train.ipynb) | | | | | | |
@@ -102,7 +102,7 @@
 "source": [
 "import azureml.core\n",
 "\n",
-"print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.18.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },