Mirror of https://github.com/Azure/MachineLearningNotebooks.git, synced 2025-12-20 09:37:04 -05:00
Compare commits: azureml-sd ... azureml-sd (36 commits)
| SHA1 |
|---|
| 648b48fc0c |
| 04db5d93e2 |
| 4e10935701 |
| f737db499d |
| 6b66da1558 |
| 8647aea9d9 |
| 3ee2dc3258 |
| 9f7c4ce668 |
| 036ca6ac75 |
| 0b8817ee1c |
| b7b5576b15 |
| c082b72b71 |
| 673e76d431 |
| c518a04a19 |
| 2f34888716 |
| 6ca0088991 |
| 40e3856786 |
| ddd025e83e |
| ece4242c8f |
| 4bca2bd7db |
| a927dbfa31 |
| 280c718f53 |
| bf1ac2b26a |
| 954c2afbce |
| fbf1ea5f1a |
| 84b72d904b |
| 82bb9fcac3 |
| 5c6bbacd47 |
| 90aaeea113 |
| eeab7284c9 |
| 02fd9b685c |
| d5c923b446 |
| 210efe022a |
| 100ab10797 |
| 1307efe7bc |
| 08d0b8cf08 |
Dockerfiles/1.0.15/Dockerfile | 29 (new file)
@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11

# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git

# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6

# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]

# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.15"]

# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.15" --single-branch https://github.com/Azure/MachineLearningNotebooks.git

# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]

# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for production environments
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py

# open up port 8887 on the container
EXPOSE 8887

# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"
Dockerfiles/1.0.17/Dockerfile | 29 (new file)
@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11

# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git

# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6

# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]

# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.17"]

# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.17" --single-branch https://github.com/Azure/MachineLearningNotebooks.git

# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]

# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for production environments
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py

# open up port 8887 on the container
EXPOSE 8887

# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"
README.md | 19
@@ -1,9 +1,6 @@
|
||||
# Azure Machine Learning service example notebooks
|
||||
|
||||
This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK
|
||||
which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK
|
||||
allows you the choice of using local or cloud compute resources, while managing
|
||||
and maintaining the complete data science workflow from the cloud.
|
||||
This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
|
||||
|
||||

|
||||
|
||||
@@ -18,16 +15,17 @@ You should always run the [Configuration](./configuration.ipynb) notebook first
|
||||
|
||||
If you want to...
|
||||
|
||||
* ...try out and explore Azure ML, start with image classification tutorials [part 1 training](./tutorials/img-classification-part1-training.ipynb) and [part 2 deployment](./tutorials/img-classification-part2-deploy.ipynb).
|
||||
* ...try out and explore Azure ML, start with image classification tutorials: [Part 1 (Training)](./tutorials/img-classification-part1-training.ipynb) and [Part 2 (Deployment)](./tutorials/img-classification-part2-deploy.ipynb).
|
||||
* ...prepare your data and do automated machine learning, start with regression tutorials: [Part 1 (Data Prep)](./tutorials/regression-part1-data-prep.ipynb) and [Part 2 (Automated ML)](./tutorials/regression-part2-automated-ml.ipynb).
|
||||
* ...learn about experimentation and tracking run history, first [train within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) and [using logging APIs](./how-to-use-azureml/training/logging-api/logging-api.ipynb).
|
||||
* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
|
||||
* ...deploy model as realtime scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [register and manage models, and create Docker images](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), and [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
|
||||
* ...deploy models as batch scoring service, first [train a model within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), learn how to [register and manage models](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), then [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](./how-to-use-azureml/machine-learning-pipelines/pipeline-mpi-batch-prediction.ipynb).
|
||||
* ...deploy models as a realtime scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [register and manage models, and create Docker images](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), and [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
|
||||
* ...deploy models as a batch scoring service, first [train a model within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), learn how to [register and manage models](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), then [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](./how-to-use-azureml/machine-learning-pipelines/pipeline-mpi-batch-prediction.ipynb).
|
||||
* ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) and [model data collection](./how-to-use-azureml/deployment/enable-data-collection-for-models-in-aks/enable-data-collection-for-models-in-aks.ipynb).
|
||||
|
||||
## Tutorials
|
||||
|
||||
The [Tutorials](./tutorials) folder contains notebooks for the tutorials described in the [Azure Machine Learning documentation](https://aka.ms/aml-docs)
|
||||
The [Tutorials](./tutorials) folder contains notebooks for the tutorials described in the [Azure Machine Learning documentation](https://aka.ms/aml-docs).
|
||||
|
||||
## How to use Azure ML
|
||||
|
||||
@@ -45,9 +43,8 @@ The [How to use Azure ML](./how-to-use-azureml) folder contains specific example
|
||||
## Documentation
|
||||
|
||||
* Quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
|
||||
|
||||
* [Python SDK reference]( https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py)
|
||||
|
||||
* [Python SDK reference](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py)
|
||||
* Azure ML Data Prep SDK [overview](https://aka.ms/data-prep-sdk), [Python SDK reference](https://aka.ms/aml-data-prep-apiref), and [tutorials and how-tos](https://aka.ms/aml-data-prep-notebooks).
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -96,7 +96,7 @@
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"This notebook was created using version 1.0.15 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.0.18 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -20,7 +20,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The [RAPIDS](https://www.developer.nvidia.com/rapids) suite of software libraries from NVIDIA enables the execution of end-to-end data science and analytics pipelines entirely on GPUs. In many machine learning projects, a significant portion of the model training time is spent in setting up the data; this stage of the process is known as Extraction, Transformation and Loading, or ETL. By using the DataFrame API for ETL and GPU-capable ML algorithms in RAPIDS, data preparation and training models can be done in GPU-accelerated end-to-end pipelines without incurring serialization costs between the pipeline stages. This notebook demonstrates how to use NVIDIA RAPIDS to prepare data and train model in Azure.\n",
|
||||
"The [RAPIDS](https://www.developer.nvidia.com/rapids) suite of software libraries from NVIDIA enables the execution of end-to-end data science and analytics pipelines entirely on GPUs. In many machine learning projects, a significant portion of the model training time is spent in setting up the data; this stage of the process is known as Extraction, Transformation and Loading, or ETL. By using the DataFrame API for ETL\u00c2\u00a0and GPU-capable ML algorithms in RAPIDS, data preparation and training models can be done in GPU-accelerated end-to-end pipelines without incurring serialization costs between the pipeline stages. This notebook demonstrates how to use NVIDIA RAPIDS to prepare data and train model\u00c2\u00a0in Azure.\n",
|
||||
" \n",
|
||||
"In this notebook, we will do the following:\n",
|
||||
" \n",
|
||||
@@ -406,4 +406,4 @@
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
}
|
||||
@@ -1 +0,0 @@
|
||||
google-site-verification: googleade5d7141b3f2910.html
|
||||
@@ -4,8 +4,9 @@ Learn how to use Azure Machine Learning services for experimentation and model m
|
||||
|
||||
As a prerequisite, run the [configuration notebook](../configuration.ipynb) first to set up your Azure ML Workspace. Then, run the notebooks in the following recommended order.
|
||||
|
||||
* [train-within-notebook](./training/train-within-notebook): Train a model while tracking run history, and learn how to deploy the model as a web service to Azure Container Instance.
|
||||
* [train-on-local](./training/train-on-local): Learn how to submit a run and use Azure ML managed run configuration.
|
||||
* [train-within-notebook](./training/train-within-notebook): Train a model while tracking run history, and learn how to deploy the model as a web service to Azure Container Instance.
* [train-on-local](./training/train-on-local): Learn how to submit a run to the local computer and use Azure ML managed run configuration.
|
||||
* [train-on-amlcompute](./training/train-on-amlcompute): Use a 1-n node Azure ML managed compute cluster for remote runs on Azure CPU or GPU infrastructure.
|
||||
* [train-on-remote-vm](./training/train-on-remote-vm): Use Data Science Virtual Machine as a target for remote runs.
|
||||
* [logging-api](./training/logging-api): Learn about the details of logging metrics to run history.
|
||||
* [register-model-create-image-deploy-service](./deployment/register-model-create-image-deploy-service): Learn about the details of model management.
|
||||
@@ -13,4 +14,4 @@ As a pre-requisite, run the [configuration Notebook](../configuration.ipynb) not
|
||||
* [enable-data-collection-for-models-in-aks](./deployment/enable-data-collection-for-models-in-aks): Learn about data collection APIs for deployed models.
* [enable-app-insights-in-production-service](./deployment/enable-app-insights-in-production-service): Learn how to use App Insights with a production web service.
|
||||
|
||||
Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
|
||||
Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
|
||||
|
||||
@@ -25,7 +25,7 @@ Below are the three execution environments supported by AutoML.
|
||||
|
||||
1. [](https://aka.ms/aml-clone-azure-notebooks)
|
||||
[Import sample notebooks ](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks.
|
||||
1. Follow the instructions in the [configuration](configuration.ipynb) notebook to create and connect to a workspace.
|
||||
1. Follow the instructions in the [configuration](../../configuration.ipynb) notebook to create and connect to a workspace.
|
||||
1. Open one of the sample notebooks.
|
||||
|
||||
<a name="databricks"></a>
|
||||
@@ -90,7 +90,7 @@ bash automl_setup_linux.sh
|
||||
```
|
||||
|
||||
### 4. Running configuration.ipynb
|
||||
- Before running any samples you next need to run the configuration notebook. Click on configuration.ipynb notebook
|
||||
- Before running any samples, you next need to run the configuration notebook. Click on the [configuration](../../configuration.ipynb) notebook.
|
||||
- Execute the cells in the notebook to register the Machine Learning Services resource provider and create a workspace (*instructions are in the notebook*); a minimal sketch of what these cells do follows below.
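For orientation, the configuration cells boil down to something like the sketch below. The workspace name, resource group, and region here are placeholders rather than values taken from this repository:

```
import azureml.core
from azureml.core import Workspace

# Placeholders only: substitute your own subscription ID, resource group, and region.
ws = Workspace.create(name='my-aml-workspace',
                      subscription_id='<subscription-id>',
                      resource_group='my-resource-group',
                      create_resource_group=True,
                      location='eastus2')

# Persist the workspace config locally so later notebooks can call Workspace.from_config().
ws.write_config()
print('SDK version:', azureml.core.VERSION)
```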
|
||||
|
||||
### 5. Running Samples
|
||||
@@ -99,9 +99,6 @@ bash automl_setup_linux.sh
|
||||
|
||||
<a name="samples"></a>
|
||||
# Automated ML SDK Sample Notebooks
|
||||
- [configuration.ipynb](configuration.ipynb)
|
||||
- Create new Azure ML Workspace
|
||||
- Save Workspace configuration file
|
||||
|
||||
- [auto-ml-classification.ipynb](classification/auto-ml-classification.ipynb)
|
||||
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
|
||||
@@ -122,7 +119,7 @@ bash automl_setup_linux.sh
|
||||
- Retrieving models for any iteration or logged metric
|
||||
- Specify automl settings as kwargs
|
||||
|
||||
- [auto-ml-remote-batchai.ipynb](remote-batchai/auto-ml-remote-batchai.ipynb)
|
||||
- [auto-ml-remote-amlcompute.ipynb](remote-batchai/auto-ml-remote-amlcompute.ipynb)
|
||||
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
|
||||
- Example of using automated ML for classification using remote AmlCompute for training
|
||||
- Parallel execution of iterations
|
||||
@@ -232,6 +229,9 @@ If a sample notebook fails with an error that property, method or library does n
|
||||
1) Check that you have selected the correct kernel in the Jupyter notebook. The kernel is displayed in the top right of the notebook page. It can be changed using the `Kernel | Change Kernel` menu option. For Azure Notebooks, it should be `Python 3.6`. For local conda environments, it should be the conda environment name that you specified in automl_setup. The default is azure_automl. Note that the kernel is saved as part of the notebook, so if you switch to a new conda environment, you will have to select the new kernel in the notebook.
|
||||
2) Check that the notebook is for the SDK version that you are using. You can check the SDK version by executing `azureml.core.VERSION` in a Jupyter notebook cell, as shown below. You can download previous versions of the sample notebooks from GitHub by clicking the `Branch` button, selecting the `Tags` tab, and then selecting the version.
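For example, a quick version check in a notebook cell (the printed value is illustrative):

```
import azureml.core

# Prints the installed Azure ML SDK version, e.g. '1.0.18'.
print(azureml.core.VERSION)
```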
|
||||
|
||||
## Numpy import fails on Windows
|
||||
Some Windows environments see an error loading numpy with the latest Python version 3.6.8. If you see this issue, try with Python version 3.6.7.
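If you are unsure which Python version a given environment is running, you can check it from a notebook cell or script:

```
import sys

# Prints the interpreter version, e.g. '3.6.7 ...'.
print(sys.version)
```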
|
||||
|
||||
## Remote run: DsvmCompute.create fails
|
||||
There are several reasons why DsvmCompute.create can fail. The cause is usually included in the error message, but you may have to look at the end of the message for the details. Some common reasons are:
|
||||
1) `Compute name is invalid, it should start with a letter, be between 2 and 16 character, and only include letters (a-zA-Z), numbers (0-9) and \'-\'.` Note that underscore is not allowed in the name.
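For reference, a minimal creation call that satisfies these naming rules could look like the sketch below; the compute name and VM size are illustrative placeholders, and `ws` is assumed to be an existing `Workspace` object:

```
from azureml.core.compute import DsvmCompute

# Valid name: 2-16 characters, letters, digits and hyphens only (no underscores).
dsvm_config = DsvmCompute.provisioning_configuration(vm_size="Standard_D2_v2")
dsvm_compute = DsvmCompute.create(ws, name="my-dsvm-01", provisioning_configuration=dsvm_config)
dsvm_compute.wait_for_completion(show_output=True)
```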
|
||||
|
||||
@@ -2,7 +2,7 @@ name: azure_automl
|
||||
dependencies:
|
||||
# The python interpreter version.
|
||||
# Currently Azure ML only supports 3.5.2 and later.
|
||||
- python=3.6
|
||||
- python>=3.5.2,<3.6.8
|
||||
- nb_conda
|
||||
- matplotlib==2.1.0
|
||||
- numpy>=1.11.0,<1.15.0
|
||||
@@ -12,9 +12,11 @@ dependencies:
|
||||
- scikit-learn>=0.18.0,<=0.19.1
|
||||
- pandas>=0.22.0,<0.23.0
|
||||
- tensorflow>=1.12.0
|
||||
- py-xgboost<=0.80
|
||||
|
||||
- pip:
|
||||
# Required packages for AzureML execution, history, and data preparation.
|
||||
- azureml-sdk[automl,notebooks,explain]
|
||||
- azureml-sdk[automl,explain]
|
||||
- azureml-widgets
|
||||
- pandas_ml
|
||||
|
||||
|
||||
@@ -2,7 +2,7 @@ name: azure_automl
|
||||
dependencies:
|
||||
# The python interpreter version.
|
||||
# Currently Azure ML only supports 3.5.2 and later.
|
||||
- python=3.6
|
||||
- python>=3.5.2,<3.6.8
|
||||
- nb_conda
|
||||
- matplotlib==2.1.0
|
||||
- numpy>=1.15.3
|
||||
@@ -12,10 +12,12 @@ dependencies:
|
||||
- scikit-learn>=0.18.0,<=0.19.1
|
||||
- pandas>=0.22.0,<0.23.0
|
||||
- tensorflow>=1.12.0
|
||||
- py-xgboost<=0.80
|
||||
|
||||
- pip:
|
||||
# Required packages for AzureML execution, history, and data preparation.
|
||||
- azureml-sdk[automl,notebooks,explain]
|
||||
- azureml-sdk[automl,explain]
|
||||
- azureml-widgets
|
||||
- pandas_ml
|
||||
|
||||
|
||||
|
||||
@@ -84,9 +84,9 @@
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"# choose a name for experiment\n",
|
||||
"experiment_name = 'automl-local-classification'\n",
|
||||
"experiment_name = 'automl-classification-deployment'\n",
|
||||
"# project folder\n",
|
||||
"project_folder = './sample_projects/automl-local-classification'\n",
|
||||
"project_folder = './sample_projects/automl-classification-deployment'\n",
|
||||
"\n",
|
||||
"experiment=Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
@@ -103,23 +103,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -289,8 +272,6 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"experiment_name = 'automl-local-classification'\n",
|
||||
"\n",
|
||||
"experiment = Experiment(ws, experiment_name)\n",
|
||||
"ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)"
|
||||
]
|
||||
|
||||
@@ -100,23 +100,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -81,8 +81,8 @@
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"# Choose a name for the experiment and specify the project folder.\n",
|
||||
"experiment_name = 'automl-local-classification'\n",
|
||||
"project_folder = './sample_projects/automl-local-classification'\n",
|
||||
"experiment_name = 'automl-classification'\n",
|
||||
"project_folder = './sample_projects/automl-classification'\n",
|
||||
"\n",
|
||||
"experiment = Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
@@ -99,23 +99,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -49,23 +49,6 @@
|
||||
"Currently, Data Prep only supports __Ubuntu 16__ and __Red Hat Enterprise Linux 7__. We are working on supporting more linux distros."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -212,7 +195,7 @@
|
||||
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
|
||||
" dsvm_compute.wait_for_completion(show_output = True)\n",
|
||||
" print(\"Waiting one minute for ssh to be accessible\")\n",
|
||||
" time.sleep(60) # Wait for ssh to be accessible"
|
||||
" time.sleep(90) # Wait for ssh to be accessible"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -49,23 +49,6 @@
|
||||
"Currently, Data Prep only supports __Ubuntu 16__ and __Red Hat Enterprise Linux 7__. We are working on supporting more linux distros."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -70,23 +70,6 @@
|
||||
"ws = Workspace.from_config()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -147,8 +147,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Data Splitting\n",
|
||||
"For the purposes of demonstration and later forecast evaluation, we now split the data into a training and a testing set. The test set will contain the final 20 weeks of observed sales for each time-series."
|
||||
"For demonstration purposes, we extract sales time-series for just a few of the stores:"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -157,19 +156,37 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ntest_periods = 20\n",
|
||||
"use_stores = [2, 5, 8]\n",
|
||||
"data_subset = data[data.Store.isin(use_stores)]\n",
|
||||
"nseries = data_subset.groupby(grain_column_names).ngroups\n",
|
||||
"print('Data subset contains {0} individual time-series.'.format(nseries))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Data Splitting\n",
|
||||
"We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"n_test_periods = 20\n",
|
||||
"\n",
|
||||
"def split_last_n_by_grain(df, n):\n",
|
||||
" \"\"\"\n",
|
||||
" Group df by grain and split on last n rows for each group\n",
|
||||
" \"\"\"\n",
|
||||
" \"\"\"Group df by grain and split on last n rows for each group.\"\"\"\n",
|
||||
" df_grouped = (df.sort_values(time_column_name) # Sort by ascending time\n",
|
||||
" .groupby(grain_column_names, group_keys=False))\n",
|
||||
" df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])\n",
|
||||
" df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])\n",
|
||||
" return df_head, df_tail\n",
|
||||
"\n",
|
||||
"X_train, X_test = split_last_n_by_grain(data, ntest_periods)"
|
||||
"X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -187,24 +204,7 @@
|
||||
"\n",
|
||||
"AutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.\n",
|
||||
"\n",
|
||||
"You are almost ready to start an AutoML training job. We will first need to create a validation set from the existing training set (i.e. for hyper-parameter tuning): "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"nvalidation_periods = 20\n",
|
||||
"X_train, X_validate = split_last_n_by_grain(X_train, nvalidation_periods)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We also need to separate the target column from the rest of the DataFrame: "
|
||||
"You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame: "
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -214,8 +214,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"target_column_name = 'Quantity'\n",
|
||||
"y_train = X_train.pop(target_column_name).values\n",
|
||||
"y_validate = X_validate.pop(target_column_name).values "
|
||||
"y_train = X_train.pop(target_column_name).values"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -224,22 +223,31 @@
|
||||
"source": [
|
||||
"## Train\n",
|
||||
"\n",
|
||||
"The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, and the training and validation data. \n",
|
||||
"The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. \n",
|
||||
"\n",
|
||||
"For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time and the grain column names. A time column is required for forecasting, while the grain is optional. If a grain is not given, the forecaster assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. \n",
|
||||
"For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.\n",
|
||||
"\n",
|
||||
"The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up-to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organizaion that needs to estimate the next month of sales would set the horizon accordingly. \n",
|
||||
"\n",
|
||||
"Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.\n",
|
||||
"\n",
|
||||
"Here is a summary of AutoMLConfig parameters used for training the OJ model:\n",
|
||||
"\n",
|
||||
"|Property|Description|\n",
|
||||
"|-|-|\n",
|
||||
"|**task**|forecasting|\n",
|
||||
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
|
||||
"|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|\n",
|
||||
"|**X**|Training matrix of features, shape = [n_training_samples, n_features]|\n",
|
||||
"|**y**|Target values, shape = [n_training_samples, ]|\n",
|
||||
"|**X_valid**|Validation matrix of features, shape = [n_validation_samples, n_features]|\n",
|
||||
"|**y_valid**|Target values for validation, shape = [n_validation_samples, ]\n",
|
||||
"|**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]|\n",
|
||||
"|**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]|\n",
|
||||
"|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|\n",
|
||||
"|**enable_ensembling**|Allow AutoML to create ensembles of the best performing models\n",
|
||||
"|**debug_log**|Log file path for writing debugging information\n",
|
||||
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
|
||||
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
|
||||
"|**time_column_name**|Name of the datetime column in the input data|\n",
|
||||
"|**grain_column_names**|Name(s) of the columns defining individual series in the input data|\n",
|
||||
"|**drop_column_names**|Name(s) of columns to drop prior to modeling|\n",
|
||||
"|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -248,10 +256,11 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"automl_settings = {\n",
|
||||
"time_series_settings = {\n",
|
||||
" 'time_column_name': time_column_name,\n",
|
||||
" 'grain_column_names': grain_column_names,\n",
|
||||
" 'drop_column_names': ['logQuantity']\n",
|
||||
" 'drop_column_names': ['logQuantity'],\n",
|
||||
" 'max_horizon': n_test_periods\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"automl_config = AutoMLConfig(task='forecasting',\n",
|
||||
@@ -260,12 +269,11 @@
|
||||
" iterations=10,\n",
|
||||
" X=X_train,\n",
|
||||
" y=y_train,\n",
|
||||
" X_valid=X_validate,\n",
|
||||
" y_valid=y_validate,\n",
|
||||
" n_cross_validations=5,\n",
|
||||
" enable_ensembling=False,\n",
|
||||
" path=project_folder,\n",
|
||||
" verbosity=logging.INFO,\n",
|
||||
" **automl_settings)"
|
||||
" **time_series_settings)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -102,23 +102,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -74,9 +74,9 @@
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"# choose a name for experiment\n",
|
||||
"experiment_name = 'automl-local-classification'\n",
|
||||
"experiment_name = 'automl-model-explanation'\n",
|
||||
"# project folder\n",
|
||||
"project_folder = './sample_projects/automl-local-classification-model-explanation'\n",
|
||||
"project_folder = './sample_projects/automl-model-explanation'\n",
|
||||
"\n",
|
||||
"experiment=Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
@@ -93,23 +93,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -96,23 +96,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -0,0 +1,555 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Automated Machine Learning\n",
|
||||
"_**Remote Execution using AmlCompute**_\n",
|
||||
"\n",
|
||||
"## Contents\n",
|
||||
"1. [Introduction](#Introduction)\n",
|
||||
"1. [Setup](#Setup)\n",
|
||||
"1. [Data](#Data)\n",
|
||||
"1. [Train](#Train)\n",
|
||||
"1. [Results](#Results)\n",
|
||||
"1. [Test](#Test)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Introduction\n",
|
||||
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
|
||||
"\n",
|
||||
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
|
||||
"\n",
|
||||
"In this notebook you would see\n",
|
||||
"1. Create an `Experiment` in an existing `Workspace`.\n",
|
||||
"2. Create or Attach existing AmlCompute to a workspace.\n",
|
||||
"3. Configure AutoML using `AutoMLConfig`.\n",
|
||||
"4. Train the model using AmlCompute\n",
|
||||
"5. Explore the results.\n",
|
||||
"6. Test the best fitted model.\n",
|
||||
"\n",
|
||||
"In addition this notebook showcases the following features\n",
|
||||
"- **Parallel** executions for iterations\n",
|
||||
"- **Asynchronous** tracking of progress\n",
|
||||
"- **Cancellation** of individual iterations or the entire run\n",
|
||||
"- Retrieving models for any iteration or logged metric\n",
|
||||
"- Specifying AutoML settings as `**kwargs`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import logging\n",
|
||||
"import os\n",
|
||||
"import csv\n",
|
||||
"\n",
|
||||
"from matplotlib import pyplot as plt\n",
|
||||
"import numpy as np\n",
|
||||
"import pandas as pd\n",
|
||||
"from sklearn import datasets\n",
|
||||
"\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core.experiment import Experiment\n",
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"from azureml.train.automl import AutoMLConfig"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"# Choose a name for the run history container in the workspace.\n",
|
||||
"experiment_name = 'automl-remote-amlcompute'\n",
|
||||
"project_folder = './project'\n",
|
||||
"\n",
|
||||
"experiment = Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
"output = {}\n",
|
||||
"output['SDK version'] = azureml.core.VERSION\n",
|
||||
"output['Subscription ID'] = ws.subscription_id\n",
|
||||
"output['Workspace Name'] = ws.name\n",
|
||||
"output['Resource Group'] = ws.resource_group\n",
|
||||
"output['Location'] = ws.location\n",
|
||||
"output['Project Directory'] = project_folder\n",
|
||||
"output['Experiment Name'] = experiment.name\n",
|
||||
"pd.set_option('display.max_colwidth', -1)\n",
|
||||
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create or Attach existing AmlCompute\n",
|
||||
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
|
||||
"\n",
|
||||
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
|
||||
"\n",
|
||||
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import AmlCompute\n",
|
||||
"from azureml.core.compute import ComputeTarget\n",
|
||||
"\n",
|
||||
"# Choose a name for your cluster.\n",
|
||||
"amlcompute_cluster_name = \"automlcl\"\n",
|
||||
"\n",
|
||||
"found = False\n",
|
||||
"# Check if this compute target already exists in the workspace.\n",
|
||||
"cts = ws.compute_targets\n",
|
||||
"if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
|
||||
" found = True\n",
|
||||
" print('Found existing compute target.')\n",
|
||||
" compute_target = cts[amlcompute_cluster_name]\n",
|
||||
" \n",
|
||||
"if not found:\n",
|
||||
" print('Creating a new compute target...')\n",
|
||||
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
|
||||
" #vm_priority = 'lowpriority', # optional\n",
|
||||
" max_nodes = 6)\n",
|
||||
"\n",
|
||||
" # Create the cluster.\n",
|
||||
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
|
||||
" \n",
|
||||
" # Can poll for a minimum number of nodes and for a specific timeout.\n",
|
||||
" # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
|
||||
" compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
|
||||
" \n",
|
||||
" # For a more detailed view of current AmlCompute status, use get_status()."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Data\n",
|
||||
"For remote executions, you need to make the data accessible from the remote compute.\n",
|
||||
"This can be done by uploading the data to DataStore.\n",
|
||||
"In this example, we upload scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"data_train = datasets.load_digits()\n",
|
||||
"\n",
|
||||
"if not os.path.isdir('data'):\n",
|
||||
" os.mkdir('data')\n",
|
||||
" \n",
|
||||
"if not os.path.exists(project_folder):\n",
|
||||
" os.makedirs(project_folder)\n",
|
||||
" \n",
|
||||
"pd.DataFrame(data_train.data).to_csv(\"data/X_train.tsv\", index=False, header=False, quoting=csv.QUOTE_ALL, sep=\"\\t\")\n",
|
||||
"pd.DataFrame(data_train.target).to_csv(\"data/y_train.tsv\", index=False, header=False, sep=\"\\t\")\n",
|
||||
"\n",
|
||||
"ds = ws.get_default_datastore()\n",
|
||||
"ds.upload(src_dir='./data', target_path='bai_data', overwrite=True, show_progress=True)\n",
|
||||
"\n",
|
||||
"from azureml.core.runconfig import DataReferenceConfiguration\n",
|
||||
"dr = DataReferenceConfiguration(datastore_name=ds.name, \n",
|
||||
" path_on_datastore='bai_data', \n",
|
||||
" path_on_compute='/tmp/azureml_runs',\n",
|
||||
" mode='download', # download files from datastore to compute target\n",
|
||||
" overwrite=False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"# create a new RunConfig object\n",
|
||||
"conda_run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to AmlCompute\n",
|
||||
"conda_run_config.target = compute_target\n",
|
||||
"conda_run_config.environment.docker.enabled = True\n",
|
||||
"conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"# set the data reference of the run coonfiguration\n",
|
||||
"conda_run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])\n",
|
||||
"conda_run_config.environment.python.conda_dependencies = cd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile $project_folder/get_data.py\n",
|
||||
"\n",
|
||||
"import pandas as pd\n",
|
||||
"\n",
|
||||
"def get_data():\n",
|
||||
" X_train = pd.read_csv(\"/tmp/azureml_runs/bai_data/X_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
" y_train = pd.read_csv(\"/tmp/azureml_runs/bai_data/y_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
"\n",
|
||||
" return { \"X\" : X_train.values, \"y\" : y_train[0].values }\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Train\n",
|
||||
"\n",
|
||||
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
|
||||
"\n",
|
||||
"**Note:** When using AmlCompute, you can't pass Numpy arrays directly to the fit method.\n",
|
||||
"\n",
|
||||
"|Property|Description|\n",
|
||||
"|-|-|\n",
|
||||
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
|
||||
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
|
||||
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
||||
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
||||
"|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"automl_settings = {\n",
|
||||
" \"iteration_timeout_minutes\": 2,\n",
|
||||
" \"iterations\": 20,\n",
|
||||
" \"n_cross_validations\": 5,\n",
|
||||
" \"primary_metric\": 'AUC_weighted',\n",
|
||||
" \"preprocess\": False,\n",
|
||||
" \"max_concurrent_iterations\": 5,\n",
|
||||
" \"verbosity\": logging.INFO\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"automl_config = AutoMLConfig(task = 'classification',\n",
|
||||
" debug_log = 'automl_errors.log',\n",
|
||||
" path = project_folder,\n",
|
||||
" run_configuration=conda_run_config,\n",
|
||||
" data_script = project_folder + \"/get_data.py\",\n",
|
||||
" **automl_settings\n",
|
||||
" )\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.\n",
|
||||
"In this example, we specify `show_output = False` to suppress console output while the run is in progress."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run = experiment.submit(automl_config, show_output = False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Results\n",
|
||||
"\n",
|
||||
"#### Loading executed runs\n",
|
||||
"In case you need to load a previously executed run, enable the cell below and replace the `run_id` value."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"remote_run = AutoMLRun(experiment = experiment, run_id = 'AutoML_5db13491-c92a-4f1d-b622-8ab8d973a058')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Widget for Monitoring Runs\n",
|
||||
"\n",
|
||||
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
|
||||
"\n",
|
||||
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`\n",
|
||||
"\n",
|
||||
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"RunDetails(remote_run).show() "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Wait until the run finishes.\n",
|
||||
"remote_run.wait_for_completion(show_output = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"#### Retrieve All Child Runs\n",
|
||||
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"children = list(remote_run.get_children())\n",
|
||||
"metricslist = {}\n",
|
||||
"for run in children:\n",
|
||||
" properties = run.get_properties()\n",
|
||||
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
|
||||
" metricslist[int(properties['iteration'])] = metrics\n",
|
||||
"\n",
|
||||
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
|
||||
"rundata"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Cancelling Runs\n",
|
||||
"\n",
|
||||
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Cancel the ongoing experiment and stop scheduling new iterations.\n",
|
||||
"# remote_run.cancel()\n",
|
||||
"\n",
|
||||
"# Cancel iteration 1 and move onto iteration 2.\n",
|
||||
"# remote_run.cancel_iteration(1)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Retrieve the Best Model\n",
|
||||
"\n",
|
||||
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"best_run, fitted_model = remote_run.get_output()\n",
|
||||
"print(best_run)\n",
|
||||
"print(fitted_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Best Model Based on Any Other Metric\n",
|
||||
"Show the run and the model which has the smallest `log_loss` value:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"lookup_metric = \"log_loss\"\n",
|
||||
"best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n",
|
||||
"print(best_run)\n",
|
||||
"print(fitted_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Model from a Specific Iteration\n",
|
||||
"Show the run and the model from the third iteration:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"iteration = 3\n",
|
||||
"third_run, third_model = remote_run.get_output(iteration=iteration)\n",
|
||||
"print(third_run)\n",
|
||||
"print(third_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Test\n",
|
||||
"\n",
|
||||
"#### Load Test Data"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"digits = datasets.load_digits()\n",
|
||||
"X_test = digits.data[:10, :]\n",
|
||||
"y_test = digits.target[:10]\n",
|
||||
"images = digits.images[:10]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Testing Our Best Fitted Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Randomly select digits and test.\n",
|
||||
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
|
||||
" print(index)\n",
|
||||
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
|
||||
" label = y_test[index]\n",
|
||||
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
|
||||
" fig = plt.figure(1, figsize=(3,3))\n",
|
||||
" ax1 = fig.add_axes((0,0,.8,.8))\n",
|
||||
" ax1.set_title(title)\n",
|
||||
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
|
||||
" plt.show()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "savitam"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -104,23 +104,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -130,7 +113,7 @@
|
||||
"1. Create a Linux DSVM in Azure, following these [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_run`s. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4GB per core.\n",
|
||||
"2. Enter the IP address, user name and password below.\n",
|
||||
"\n",
|
||||
"**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordinglyaddress. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on changing SSH ports for security reasons."
|
||||
"**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordinglyaddress. [Read more](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/detailed-troubleshoot-ssh-connection) on changing SSH ports for security reasons."
|
||||
]
|
||||
},
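{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch (the IP address is a placeholder) of building the SSH address when a non-default port such as 5022 is used:\n",
"```python\n",
"dsvm_ip_addr = '<dsvm-ip-address>'  # placeholder\n",
"dsvm_ssh_port = 5022                # only needed when SSH is not on port 22\n",
"dsvm_address = '{}:{}'.format(dsvm_ip_addr, dsvm_ssh_port)\n",
"```"
]
},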
|
||||
{
|
||||
|
||||
@@ -67,6 +67,7 @@
|
||||
"source": [
|
||||
"import logging\n",
|
||||
"import os\n",
|
||||
"import csv\n",
|
||||
"\n",
|
||||
"from matplotlib import pyplot as plt\n",
|
||||
"import numpy as np\n",
|
||||
@@ -89,7 +90,7 @@
|
||||
"\n",
|
||||
"# Choose a name for the run history container in the workspace.\n",
|
||||
"experiment_name = 'automl-remote-amlcompute'\n",
|
||||
"project_folder = './sample_projects/automl-remote-amlcompute'\n",
|
||||
"project_folder = './project'\n",
|
||||
"\n",
|
||||
"experiment = Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
@@ -106,23 +107,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -171,6 +155,51 @@
|
||||
" # For a more detailed view of current AmlCompute status, use get_status()."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Data\n",
|
||||
"For remote executions, you need to make the data accessible from the remote compute.\n",
|
||||
"This can be done by uploading the data to DataStore.\n",
|
||||
"In this example, we upload scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"data_train = datasets.load_digits()\n",
|
||||
"\n",
|
||||
"if not os.path.isdir('data'):\n",
|
||||
" os.mkdir('data')\n",
|
||||
" \n",
|
||||
"if not os.path.exists(project_folder):\n",
|
||||
" os.makedirs(project_folder)\n",
|
||||
" \n",
|
||||
"pd.DataFrame(data_train.data).to_csv(\"data/X_train.tsv\", index=False, header=False, quoting=csv.QUOTE_ALL, sep=\"\\t\")\n",
|
||||
"pd.DataFrame(data_train.target).to_csv(\"data/y_train.tsv\", index=False, header=False, sep=\"\\t\")\n",
|
||||
"\n",
|
||||
"ds = ws.get_default_datastore()\n",
|
||||
"ds.upload(src_dir='./data', target_path='bai_data', overwrite=True, show_progress=True)\n",
|
||||
"\n",
|
||||
"from azureml.core.runconfig import DataReferenceConfiguration\n",
|
||||
"dr = DataReferenceConfiguration(datastore_name=ds.name, \n",
|
||||
" path_on_datastore='bai_data', \n",
|
||||
" path_on_compute='/tmp/azureml_runs',\n",
|
||||
" mode='download', # download files from datastore to compute target\n",
|
||||
" overwrite=False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -188,29 +217,13 @@
|
||||
"conda_run_config.environment.docker.enabled = True\n",
|
||||
"conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"# set the data reference of the run coonfiguration\n",
|
||||
"conda_run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])\n",
|
||||
"conda_run_config.environment.python.conda_dependencies = cd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Data\n",
|
||||
"For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
|
||||
"In this example, the `get_data()` function returns data using scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if not os.path.exists(project_folder):\n",
|
||||
" os.makedirs(project_folder)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -219,17 +232,13 @@
|
||||
"source": [
|
||||
"%%writefile $project_folder/get_data.py\n",
|
||||
"\n",
|
||||
"from sklearn import datasets\n",
|
||||
"from scipy import sparse\n",
|
||||
"import numpy as np\n",
|
||||
"import pandas as pd\n",
|
||||
"\n",
|
||||
"def get_data():\n",
|
||||
" \n",
|
||||
" digits = datasets.load_digits()\n",
|
||||
" X_train = digits.data\n",
|
||||
" y_train = digits.target\n",
|
||||
" X_train = pd.read_csv(\"/tmp/azureml_runs/bai_data/X_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
" y_train = pd.read_csv(\"/tmp/azureml_runs/bai_data/y_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
"\n",
|
||||
" return { \"X\" : X_train, \"y\" : y_train }"
|
||||
" return { \"X\" : X_train.values, \"y\" : y_train[0].values }\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -99,23 +99,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -123,7 +106,7 @@
|
||||
"### Create a Remote Linux DSVM\n",
|
||||
"Note: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
|
||||
"\n",
|
||||
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you can switch to a different port (such as 5022), you can append the port number to the address. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on this."
|
||||
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you can switch to a different port (such as 5022), you can append the port number to the address. [Read more](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/detailed-troubleshoot-ssh-connection) on this."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -145,7 +128,7 @@
|
||||
" dsvm_compute = DsvmCompute.create(ws, name=compute_target_name, provisioning_configuration=dsvm_config)\n",
|
||||
" dsvm_compute.wait_for_completion(show_output=True)\n",
|
||||
" print(\"Waiting one minute for ssh to be accessible\")\n",
|
||||
" time.sleep(60) # Wait for ssh to be accessible"
|
||||
" time.sleep(90) # Wait for ssh to be accessible"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -68,6 +68,7 @@
|
||||
"import logging\n",
|
||||
"import os\n",
|
||||
"import time\n",
|
||||
"import csv\n",
|
||||
"\n",
|
||||
"from matplotlib import pyplot as plt\n",
|
||||
"import numpy as np\n",
|
||||
@@ -90,7 +91,7 @@
|
||||
"\n",
|
||||
"# Choose a name for the run history container in the workspace.\n",
|
||||
"experiment_name = 'automl-remote-dsvm'\n",
|
||||
"project_folder = './sample_projects/automl-remote-dsvm'\n",
|
||||
"project_folder = './project'\n",
|
||||
"\n",
|
||||
"experiment = Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
@@ -107,23 +108,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -153,6 +137,44 @@
|
||||
" time.sleep(90) # Wait for ssh to be accessible"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Data\n",
|
||||
"For remote executions, you need to make the data accessible from the remote compute.\n",
|
||||
"This can be done by uploading the data to DataStore.\n",
|
||||
"In this example, we upload scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"data_train = datasets.load_digits()\n",
|
||||
"\n",
|
||||
"if not os.path.isdir('data'):\n",
|
||||
" os.mkdir('data')\n",
|
||||
" \n",
|
||||
"if not os.path.exists(project_folder):\n",
|
||||
" os.makedirs(project_folder)\n",
|
||||
" \n",
|
||||
"pd.DataFrame(data_train.data).to_csv(\"data/X_train.tsv\", index=False, header=False, quoting=csv.QUOTE_ALL, sep=\"\\t\")\n",
|
||||
"pd.DataFrame(data_train.target).to_csv(\"data/y_train.tsv\", index=False, header=False, sep=\"\\t\")\n",
|
||||
"\n",
|
||||
"ds = ws.get_default_datastore()\n",
|
||||
"ds.upload(src_dir='./data', target_path='re_data', overwrite=True, show_progress=True)\n",
|
||||
"\n",
|
||||
"from azureml.core.runconfig import DataReferenceConfiguration\n",
|
||||
"dr = DataReferenceConfiguration(datastore_name=ds.name, \n",
|
||||
" path_on_datastore='re_data', \n",
|
||||
" path_on_compute='/tmp/azureml_runs',\n",
|
||||
" mode='download', # download files from datastore to compute target\n",
|
||||
" overwrite=False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -168,29 +190,13 @@
|
||||
"# Set compute target to the Linux DSVM\n",
|
||||
"conda_run_config.target = dsvm_compute\n",
|
||||
"\n",
|
||||
"# set the data reference of the run coonfiguration\n",
|
||||
"conda_run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])\n",
|
||||
"conda_run_config.environment.python.conda_dependencies = cd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Data\n",
|
||||
"For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
|
||||
"In this example, the `get_data()` function returns data using scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if not os.path.exists(project_folder):\n",
|
||||
" os.makedirs(project_folder)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -199,17 +205,13 @@
|
||||
"source": [
|
||||
"%%writefile $project_folder/get_data.py\n",
|
||||
"\n",
|
||||
"from sklearn import datasets\n",
|
||||
"from scipy import sparse\n",
|
||||
"import numpy as np\n",
|
||||
"import pandas as pd\n",
|
||||
"\n",
|
||||
"def get_data():\n",
|
||||
" \n",
|
||||
" digits = datasets.load_digits()\n",
|
||||
" X_train = digits.data[100:,:]\n",
|
||||
" y_train = digits.target[100:]\n",
|
||||
" X_train = pd.read_csv(\"/tmp/azureml_runs/re_data/X_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
" y_train = pd.read_csv(\"/tmp/azureml_runs/re_data/y_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
"\n",
|
||||
" return { \"X\" : X_train, \"y\" : y_train }"
|
||||
" return { \"X\" : X_train.values, \"y\" : y_train[0].values }\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -75,7 +75,7 @@
|
||||
"experiment_name = 'non_sample_weight_experiment'\n",
|
||||
"sample_weight_experiment_name = 'sample_weight_experiment'\n",
|
||||
"\n",
|
||||
"project_folder = './sample_projects/automl-local-classification'\n",
|
||||
"project_folder = './sample_projects/sample_weight'\n",
|
||||
"\n",
|
||||
"experiment = Experiment(ws, experiment_name)\n",
|
||||
"sample_weight_experiment=Experiment(ws, sample_weight_experiment_name)\n",
|
||||
@@ -93,23 +93,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -79,9 +79,9 @@
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"# choose a name for the experiment\n",
|
||||
"experiment_name = 'automl-local-missing-data'\n",
|
||||
"experiment_name = 'sparse-data-train-test-split'\n",
|
||||
"# project folder\n",
|
||||
"project_folder = './sample_projects/automl-local-missing-data'\n",
|
||||
"project_folder = './sample_projects/sparse-data-train-test-split'\n",
|
||||
"\n",
|
||||
"experiment = Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
@@ -98,23 +98,6 @@
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -88,23 +88,6 @@
|
||||
"pd.DataFrame(data = output, index = ['']).T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -21,6 +21,9 @@ Notebook 6 is an Automated ML sample notebook for Classification.
|
||||
|
||||
Learn more about [how to use Azure Databricks as a development environment](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment#azure-databricks) for Azure Machine Learning service.
|
||||
|
||||
**Databricks as a Compute Target from AML Pipelines**
|
||||
You can use Azure Databricks as a compute target from [Azure Machine Learning Pipelines](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines). Take a look at this notebook for details: [aml-pipelines-use-databricks-as-compute-target.ipynb](aml-pipelines-use-databricks-as-compute-target.ipynb).
|
||||
|
||||
For more on SDK concepts, please refer to [notebooks](https://github.com/Azure/MachineLearningNotebooks).
|
||||
|
||||
**Please let us know your feedback.**
|
||||
@@ -0,0 +1,714 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Using Databricks as a Compute Target from Azure Machine Learning Pipeline\n",
|
||||
"To use Databricks as a compute target from [Azure Machine Learning Pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines), a [DatabricksStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep?view=azure-ml-py) is used. This notebook demonstrates the use of DatabricksStep in Azure Machine Learning Pipeline.\n",
|
||||
"\n",
|
||||
"The notebook will show:\n",
|
||||
"1. Running an arbitrary Databricks notebook that the customer has in Databricks workspace\n",
|
||||
"2. Running an arbitrary Python script that the customer has in DBFS\n",
|
||||
"3. Running an arbitrary Python script that is available on local computer (will upload to DBFS, and then run in Databricks) \n",
|
||||
"4. Running a JAR job that the customer has in DBFS.\n",
|
||||
"\n",
|
||||
"## Before you begin:\n",
|
||||
"\n",
|
||||
"1. **Create an Azure Databricks workspace** in the same subscription where you have your Azure Machine Learning workspace. You will need details of this workspace later on to define DatabricksStep. [Click here](https://ms.portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.Databricks%2Fworkspaces) for more information.\n",
|
||||
"2. **Create PAT (access token)**: Manually create a Databricks access token at the Azure Databricks portal. See [this](https://docs.databricks.com/api/latest/authentication.html#generate-a-token) for more information.\n",
|
||||
"3. **Add demo notebook to ADB**: This notebook has a sample you can use as is. Launch Azure Databricks attached to your Azure Machine Learning workspace and add a new notebook. \n",
|
||||
"4. **Create/attach a Blob storage** for use from ADB"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Add demo notebook to ADB Workspace\n",
|
||||
"Copy and paste the below code to create a new notebook in your ADB workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"```python\n",
|
||||
"# direct access\n",
|
||||
"dbutils.widgets.get(\"myparam\")\n",
|
||||
"p = getArgument(\"myparam\")\n",
|
||||
"print (\"Param -\\'myparam':\")\n",
|
||||
"print (p)\n",
|
||||
"\n",
|
||||
"dbutils.widgets.get(\"input\")\n",
|
||||
"i = getArgument(\"input\")\n",
|
||||
"print (\"Param -\\'input':\")\n",
|
||||
"print (i)\n",
|
||||
"\n",
|
||||
"dbutils.widgets.get(\"output\")\n",
|
||||
"o = getArgument(\"output\")\n",
|
||||
"print (\"Param -\\'output':\")\n",
|
||||
"print (o)\n",
|
||||
"\n",
|
||||
"n = i + \"/testdata.txt\"\n",
|
||||
"df = spark.read.csv(n)\n",
|
||||
"\n",
|
||||
"display (df)\n",
|
||||
"\n",
|
||||
"data = [('value1', 'value2')]\n",
|
||||
"df2 = spark.createDataFrame(data)\n",
|
||||
"\n",
|
||||
"z = o + \"/output.txt\"\n",
|
||||
"df2.write.csv(z)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Azure Machine Learning and Pipeline SDK-specific imports"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core.runconfig import JarLibrary\n",
|
||||
"from azureml.core.compute import ComputeTarget, DatabricksCompute\n",
|
||||
"from azureml.exceptions import ComputeTargetException\n",
|
||||
"from azureml.core import Workspace, Experiment\n",
|
||||
"from azureml.pipeline.core import Pipeline, PipelineData\n",
|
||||
"from azureml.pipeline.steps import DatabricksStep\n",
|
||||
"from azureml.core.datastore import Datastore\n",
|
||||
"from azureml.data.data_reference import DataReference\n",
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize Workspace\n",
|
||||
"\n",
|
||||
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Attach Databricks compute target\n",
|
||||
"Next, you need to add your Databricks workspace to Azure Machine Learning as a compute target and give it a name. You will use this name to refer to your Databricks workspace compute target inside Azure Machine Learning.\n",
|
||||
"\n",
|
||||
"- **Resource Group** - The resource group name of your Azure Machine Learning workspace\n",
|
||||
"- **Databricks Workspace Name** - The workspace name of your Azure Databricks workspace\n",
|
||||
"- **Databricks Access Token** - The access token you created in ADB\n",
|
||||
"\n",
|
||||
"**The Databricks workspace need to be present in the same subscription as your AML workspace**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Replace with your account info before running.\n",
|
||||
" \n",
|
||||
"db_compute_name=os.getenv(\"DATABRICKS_COMPUTE_NAME\", \"<my-databricks-compute-name>\") # Databricks compute name\n",
|
||||
"db_resource_group=os.getenv(\"DATABRICKS_RESOURCE_GROUP\", \"<my-db-resource-group>\") # Databricks resource group\n",
|
||||
"db_workspace_name=os.getenv(\"DATABRICKS_WORKSPACE_NAME\", \"<my-db-workspace-name>\") # Databricks workspace name\n",
|
||||
"db_access_token=os.getenv(\"DATABRICKS_ACCESS_TOKEN\", \"<my-access-token>\") # Databricks access token\n",
|
||||
" \n",
|
||||
"try:\n",
|
||||
" databricks_compute = DatabricksCompute(workspace=ws, name=db_compute_name)\n",
|
||||
" print('Compute target {} already exists'.format(db_compute_name))\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print('Compute not found, will use below parameters to attach new one')\n",
|
||||
" print('db_compute_name {}'.format(db_compute_name))\n",
|
||||
" print('db_resource_group {}'.format(db_resource_group))\n",
|
||||
" print('db_workspace_name {}'.format(db_workspace_name))\n",
|
||||
" print('db_access_token {}'.format(db_access_token))\n",
|
||||
" \n",
|
||||
" config = DatabricksCompute.attach_configuration(\n",
|
||||
" resource_group = db_resource_group,\n",
|
||||
" workspace_name = db_workspace_name,\n",
|
||||
" access_token= db_access_token)\n",
|
||||
" databricks_compute=ComputeTarget.attach(ws, db_compute_name, config)\n",
|
||||
" databricks_compute.wait_for_completion(True)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Data Connections with Inputs and Outputs\n",
|
||||
"The DatabricksStep supports Azure Bloband ADLS for inputs and outputs. You also will need to define a [Secrets](https://docs.azuredatabricks.net/user-guide/secrets/index.html) scope to enable authentication to external data sources such as Blob and ADLS from Databricks.\n",
|
||||
"\n",
|
||||
"- Databricks documentation on [Azure Blob](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html)\n",
|
||||
"- Databricks documentation on [ADLS](https://docs.databricks.com/spark/latest/data-sources/azure/azure-datalake.html)\n",
|
||||
"\n",
|
||||
"### Type of Data Access\n",
|
||||
"Databricks allows to interact with Azure Blob and ADLS in two ways.\n",
|
||||
"- **Direct Access**: Databricks allows you to interact with Azure Blob or ADLS URIs directly. The input or output URIs will be mapped to a Databricks widget param in the Databricks notebook.\n",
|
||||
"- **Mounting**: You will be supplied with additional parameters and secrets that will enable you to mount your ADLS or Azure Blob input or output location in your Databricks notebook."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Direct Access: Python sample code\n",
|
||||
"If you have a data reference named \"input\" it will represent the URI of the input and you can access it directly in the Databricks python notebook like so:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"```python\n",
|
||||
"dbutils.widgets.get(\"input\")\n",
|
||||
"y = getArgument(\"input\")\n",
|
||||
"df = spark.read.csv(y)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Mounting: Python sample code for Azure Blob\n",
|
||||
"Given an Azure Blob data reference named \"input\" the following widget params will be made available in the Databricks notebook:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"```python\n",
|
||||
"# This contains the input URI\n",
|
||||
"dbutils.widgets.get(\"input\")\n",
|
||||
"myinput_uri = getArgument(\"input\")\n",
|
||||
"\n",
|
||||
"# How to get the input datastore name inside ADB notebook\n",
|
||||
"# This contains the name of a Databricks secret (in the predefined \"amlscope\" secret scope) \n",
|
||||
"# that contians an access key or sas for the Azure Blob input (this name is obtained by appending \n",
|
||||
"# the name of the input with \"_blob_secretname\". \n",
|
||||
"dbutils.widgets.get(\"input_blob_secretname\") \n",
|
||||
"myinput_blob_secretname = getArgument(\"input_blob_secretname\")\n",
|
||||
"\n",
|
||||
"# This contains the required configuration for mounting\n",
|
||||
"dbutils.widgets.get(\"input_blob_config\")\n",
|
||||
"myinput_blob_config = getArgument(\"input_blob_config\")\n",
|
||||
"\n",
|
||||
"# Usage\n",
|
||||
"dbutils.fs.mount(\n",
|
||||
" source = myinput_uri,\n",
|
||||
" mount_point = \"/mnt/input\",\n",
|
||||
" extra_configs = {myinput_blob_config:dbutils.secrets.get(scope = \"amlscope\", key = myinput_blob_secretname)})\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Mounting: Python sample code for ADLS\n",
|
||||
"Given an ADLS data reference named \"input\" the following widget params will be made available in the Databricks notebook:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"```python\n",
|
||||
"# This contains the input URI\n",
|
||||
"dbutils.widgets.get(\"input\") \n",
|
||||
"myinput_uri = getArgument(\"input\")\n",
|
||||
"\n",
|
||||
"# This contains the client id for the service principal \n",
|
||||
"# that has access to the adls input\n",
|
||||
"dbutils.widgets.get(\"input_adls_clientid\") \n",
|
||||
"myinput_adls_clientid = getArgument(\"input_adls_clientid\")\n",
|
||||
"\n",
|
||||
"# This contains the name of a Databricks secret (in the predefined \"amlscope\" secret scope) \n",
|
||||
"# that contains the secret for the above mentioned service principal\n",
|
||||
"dbutils.widgets.get(\"input_adls_secretname\") \n",
|
||||
"myinput_adls_secretname = getArgument(\"input_adls_secretname\")\n",
|
||||
"\n",
|
||||
"# This contains the refresh url for the mounting configs\n",
|
||||
"dbutils.widgets.get(\"input_adls_refresh_url\") \n",
|
||||
"myinput_adls_refresh_url = getArgument(\"input_adls_refresh_url\")\n",
|
||||
"\n",
|
||||
"# Usage \n",
|
||||
"configs = {\"dfs.adls.oauth2.access.token.provider.type\": \"ClientCredential\",\n",
|
||||
" \"dfs.adls.oauth2.client.id\": myinput_adls_clientid,\n",
|
||||
" \"dfs.adls.oauth2.credential\": dbutils.secrets.get(scope = \"amlscope\", key =myinput_adls_secretname),\n",
|
||||
" \"dfs.adls.oauth2.refresh.url\": myinput_adls_refresh_url}\n",
|
||||
"\n",
|
||||
"dbutils.fs.mount(\n",
|
||||
" source = myinput_uri,\n",
|
||||
" mount_point = \"/mnt/output\",\n",
|
||||
" extra_configs = configs)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Use Databricks from Azure Machine Learning Pipeline\n",
|
||||
"To use Databricks as a compute target from Azure Machine Learning Pipeline, a DatabricksStep is used. Let's define a datasource (via DataReference) and intermediate data (via PipelineData) to be used in DatabricksStep."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Use the default blob storage\n",
|
||||
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
|
||||
"print('Datastore {} will be used'.format(def_blob_store.name))\n",
|
||||
"\n",
|
||||
"# We are uploading a sample file in the local directory to be used as a datasource\n",
|
||||
"def_blob_store.upload_files(files=[\"./testdata.txt\"], target_path=\"dbtest\", overwrite=False)\n",
|
||||
"\n",
|
||||
"step_1_input = DataReference(datastore=def_blob_store, path_on_datastore=\"dbtest\",\n",
|
||||
" data_reference_name=\"input\")\n",
|
||||
"\n",
|
||||
"step_1_output = PipelineData(\"output\", datastore=def_blob_store)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Add a DatabricksStep\n",
|
||||
"Adds a Databricks notebook as a step in a Pipeline.\n",
|
||||
"- ***name:** Name of the Module\n",
|
||||
"- **inputs:** List of input connections for data consumed by this step. Fetch this inside the notebook using dbutils.widgets.get(\"input\")\n",
|
||||
"- **outputs:** List of output port definitions for outputs produced by this step. Fetch this inside the notebook using dbutils.widgets.get(\"output\")\n",
|
||||
"- **existing_cluster_id:** Cluster ID of an existing Interactive cluster on the Databricks workspace. If you are providing this, do not provide any of the parameters below that are used to create a new cluster such as spark_version, node_type, etc.\n",
|
||||
"- **spark_version:** Version of spark for the databricks run cluster. default value: 4.0.x-scala2.11\n",
|
||||
"- **node_type:** Azure vm node types for the databricks run cluster. default value: Standard_D3_v2\n",
|
||||
"- **num_workers:** Specifies a static number of workers for the databricks run cluster\n",
|
||||
"- **min_workers:** Specifies a min number of workers to use for auto-scaling the databricks run cluster\n",
|
||||
"- **max_workers:** Specifies a max number of workers to use for auto-scaling the databricks run cluster\n",
|
||||
"- **spark_env_variables:** Spark environment variables for the databricks run cluster (dictionary of {str:str}). default value: {'PYSPARK_PYTHON': '/databricks/python3/bin/python3'}\n",
|
||||
"- **notebook_path:** Path to the notebook in the databricks instance. If you are providing this, do not provide python script related paramaters or JAR related parameters.\n",
|
||||
"- **notebook_params:** Parameters for the databricks notebook (dictionary of {str:str}). Fetch this inside the notebook using dbutils.widgets.get(\"myparam\")\n",
|
||||
"- **python_script_path:** The path to the python script in the DBFS or S3. If you are providing this, do not provide python_script_name which is used for uploading script from local machine.\n",
|
||||
"- **python_script_params:** Parameters for the python script (list of str)\n",
|
||||
"- **main_class_name:** The name of the entry point in a JAR module. If you are providing this, do not provide any python script or notebook related parameters.\n",
|
||||
"- **jar_params:** Parameters for the JAR module (list of str)\n",
|
||||
"- **python_script_name:** name of a python script on your local machine (relative to source_directory). If you are providing this do not provide python_script_path which is used to execute a remote python script; or any of the JAR or notebook related parameters.\n",
|
||||
"- **source_directory:** folder that contains the script and other files\n",
|
||||
"- **hash_paths:** list of paths to hash to detect a change in source_directory (script file is always hashed)\n",
|
||||
"- **run_name:** Name in databricks for this run\n",
|
||||
"- **timeout_seconds:** Timeout for the databricks run\n",
|
||||
"- **runconfig:** Runconfig to use. Either pass runconfig or each library type as a separate parameter but do not mix the two\n",
|
||||
"- **maven_libraries:** maven libraries for the databricks run\n",
|
||||
"- **pypi_libraries:** pypi libraries for the databricks run\n",
|
||||
"- **egg_libraries:** egg libraries for the databricks run\n",
|
||||
"- **jar_libraries:** jar libraries for the databricks run\n",
|
||||
"- **rcran_libraries:** rcran libraries for the databricks run\n",
|
||||
"- **compute_target:** Azure Databricks compute\n",
|
||||
"- **allow_reuse:** Whether the step should reuse previous results when run with the same settings/inputs\n",
|
||||
"- **version:** Optional version tag to denote a change in functionality for the step\n",
|
||||
"\n",
|
||||
"\\* *denotes required fields* \n",
|
||||
"*You must provide exactly one of num_workers or min_workers and max_workers paramaters* \n",
|
||||
"*You must provide exactly one of databricks_compute or databricks_compute_name parameters*\n",
|
||||
"\n",
|
||||
"## Use runconfig to specify library dependencies\n",
|
||||
"You can use a runconfig to specify the library dependencies for your cluster in Databricks. The runconfig will contain a databricks section as follows:\n",
|
||||
"\n",
|
||||
"```yaml\n",
|
||||
"environment:\n",
|
||||
"# Databricks details\n",
|
||||
" databricks:\n",
|
||||
"# List of maven libraries.\n",
|
||||
" mavenLibraries:\n",
|
||||
" - coordinates: org.jsoup:jsoup:1.7.1\n",
|
||||
" repo: ''\n",
|
||||
" exclusions:\n",
|
||||
" - slf4j:slf4j\n",
|
||||
" - '*:hadoop-client'\n",
|
||||
"# List of PyPi libraries\n",
|
||||
" pypiLibraries:\n",
|
||||
" - package: beautifulsoup4\n",
|
||||
" repo: ''\n",
|
||||
"# List of RCran libraries\n",
|
||||
" rcranLibraries:\n",
|
||||
" -\n",
|
||||
"# Coordinates.\n",
|
||||
" package: ada\n",
|
||||
"# Repo\n",
|
||||
" repo: http://cran.us.r-project.org\n",
|
||||
"# List of JAR libraries\n",
|
||||
" jarLibraries:\n",
|
||||
" -\n",
|
||||
"# Coordinates.\n",
|
||||
" library: dbfs:/mnt/libraries/library.jar\n",
|
||||
"# List of Egg libraries\n",
|
||||
" eggLibraries:\n",
|
||||
" -\n",
|
||||
"# Coordinates.\n",
|
||||
" library: dbfs:/mnt/libraries/library.egg\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"You can then create a RunConfiguration object using this file and pass it as the runconfig parameter to DatabricksStep.\n",
|
||||
"```python\n",
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"\n",
|
||||
"runconfig = RunConfiguration()\n",
|
||||
"runconfig.load(path='<directory_where_runconfig_is_stored>', name='<runconfig_file_name>')\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 1. Running the demo notebook already added to the Databricks workspace\n",
|
||||
"Create a notebook in the Azure Databricks workspace, and provide the path to that notebook as the value associated with the environment variable \"DATABRICKS_NOTEBOOK_PATH\". This will then set the variable notebook_path when you run the code cell below:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"notebook_path=os.getenv(\"DATABRICKS_NOTEBOOK_PATH\", \"<my-databricks-notebook-path>\") # Databricks notebook path\n",
|
||||
"\n",
|
||||
"dbNbStep = DatabricksStep(\n",
|
||||
" name=\"DBNotebookInWS\",\n",
|
||||
" inputs=[step_1_input],\n",
|
||||
" outputs=[step_1_output],\n",
|
||||
" num_workers=1,\n",
|
||||
" notebook_path=notebook_path,\n",
|
||||
" notebook_params={'myparam': 'testparam'},\n",
|
||||
" run_name='DB_Notebook_demo',\n",
|
||||
" compute_target=databricks_compute,\n",
|
||||
" allow_reuse=True\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Build and submit the Experiment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#PUBLISHONLY\n",
|
||||
"#steps = [dbNbStep]\n",
|
||||
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
||||
"#pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline)\n",
|
||||
"#pipeline_run.wait_for_completion()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### View Run Details"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#PUBLISHONLY\n",
|
||||
"#from azureml.widgets import RunDetails\n",
|
||||
"#RunDetails(pipeline_run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 2. Running a Python script from DBFS\n",
|
||||
"This shows how to run a Python script in DBFS. \n",
|
||||
"\n",
|
||||
"To complete this, you will need to first upload the Python script in your local machine to DBFS using the [CLI](https://docs.azuredatabricks.net/user-guide/dbfs-databricks-file-system.html). The CLI command is given below:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"dbfs cp ./train-db-dbfs.py dbfs:/train-db-dbfs.py\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"The code in the below cell assumes that you have completed the previous step of uploading the script `train-db-dbfs.py` to the root folder in DBFS."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"python_script_path = os.getenv(\"DATABRICKS_PYTHON_SCRIPT_PATH\", \"<my-databricks-python-script-path>\") # Databricks python script path\n",
|
||||
"\n",
|
||||
"dbPythonInDbfsStep = DatabricksStep(\n",
|
||||
" name=\"DBPythonInDBFS\",\n",
|
||||
" inputs=[step_1_input],\n",
|
||||
" num_workers=1,\n",
|
||||
" python_script_path=python_script_path,\n",
|
||||
" python_script_params={'--input_data'},\n",
|
||||
" run_name='DB_Python_demo',\n",
|
||||
" compute_target=databricks_compute,\n",
|
||||
" allow_reuse=True\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Build and submit the Experiment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#PUBLISHONLY\n",
|
||||
"#steps = [dbPythonInDbfsStep]\n",
|
||||
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
||||
"#pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline)\n",
|
||||
"#pipeline_run.wait_for_completion()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### View Run Details"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#PUBLISHONLY\n",
|
||||
"#from azureml.widgets import RunDetails\n",
|
||||
"#RunDetails(pipeline_run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 3. Running a Python script in Databricks that currenlty is in local computer\n",
|
||||
"To run a Python script that is currently in your local computer, follow the instructions below. \n",
|
||||
"\n",
|
||||
"The commented out code below code assumes that you have `train-db-local.py` in the `scripts` subdirectory under the current working directory.\n",
|
||||
"\n",
|
||||
"In this case, the Python script will be uploaded first to DBFS, and then the script will be run in Databricks."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"python_script_name = \"train-db-local.py\"\n",
|
||||
"source_directory = \".\"\n",
|
||||
"\n",
|
||||
"dbPythonInLocalMachineStep = DatabricksStep(\n",
|
||||
" name=\"DBPythonInLocalMachine\",\n",
|
||||
" inputs=[step_1_input],\n",
|
||||
" num_workers=1,\n",
|
||||
" python_script_name=python_script_name,\n",
|
||||
" source_directory=source_directory,\n",
|
||||
" run_name='DB_Python_Local_demo',\n",
|
||||
" compute_target=databricks_compute,\n",
|
||||
" allow_reuse=True\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Build and submit the Experiment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"steps = [dbPythonInLocalMachineStep]\n",
|
||||
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
||||
"pipeline_run = Experiment(ws, 'DB_Python_Local_demo').submit(pipeline)\n",
|
||||
"pipeline_run.wait_for_completion()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### View Run Details"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"RunDetails(pipeline_run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### 4. Running a JAR job that is alreay added in DBFS\n",
|
||||
"To run a JAR job that is already uploaded to DBFS, follow the instructions below. You will first upload the JAR file to DBFS using the [CLI](https://docs.azuredatabricks.net/user-guide/dbfs-databricks-file-system.html).\n",
|
||||
"\n",
|
||||
"The commented out code in the below cell assumes that you have uploaded `train-db-dbfs.jar` to the root folder in DBFS. You can upload `train-db-dbfs.jar` to the root folder in DBFS using this commandline so you can use `jar_library_dbfs_path = \"dbfs:/train-db-dbfs.jar\"`:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"dbfs cp ./train-db-dbfs.jar dbfs:/train-db-dbfs.jar\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"main_jar_class_name = \"com.microsoft.aeva.Main\"\n",
|
||||
"jar_library_dbfs_path = os.getenv(\"DATABRICKS_JAR_LIB_PATH\", \"<my-databricks-jar-lib-path>\") # Databricks jar library path\n",
|
||||
"\n",
|
||||
"dbJarInDbfsStep = DatabricksStep(\n",
|
||||
" name=\"DBJarInDBFS\",\n",
|
||||
" inputs=[step_1_input],\n",
|
||||
" num_workers=1,\n",
|
||||
" main_class_name=main_jar_class_name,\n",
|
||||
" jar_params={'arg1', 'arg2'},\n",
|
||||
" run_name='DB_JAR_demo',\n",
|
||||
" jar_libraries=[JarLibrary(jar_library_dbfs_path)],\n",
|
||||
" compute_target=databricks_compute,\n",
|
||||
" allow_reuse=True\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Build and submit the Experiment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#PUBLISHONLY\n",
|
||||
"#steps = [dbJarInDbfsStep]\n",
|
||||
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
||||
"#pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline)\n",
|
||||
"#pipeline_run.wait_for_completion()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### View Run Details"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#PUBLISHONLY\n",
|
||||
"#from azureml.widgets import RunDetails\n",
|
||||
"#RunDetails(pipeline_run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Next: ADLA as a Compute Target\n",
|
||||
"To use ADLA as a compute target from Azure Machine Learning Pipeline, a AdlaStep is used. This [notebook](./aml-pipelines-use-adla-as-compute-target.ipynb) demonstrates the use of AdlaStep in Azure Machine Learning Pipeline."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "diray"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -11,13 +11,6 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -60,14 +53,10 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# import the Workspace class and check the azureml SDK version\n",
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config(auth = auth)\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||
"# Set auth to be used by workspace related APIs.\n",
|
||||
"# For automation or CI/CD ServicePrincipalAuthentication can be used.\n",
|
||||
"# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py\n",
|
||||
"auth = None"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -79,7 +68,7 @@
|
||||
"# import the Workspace class and check the azureml SDK version\n",
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"ws = Workspace.from_config(auth = auth)\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
@@ -350,9 +339,6 @@
|
||||
"authors": [
|
||||
{
|
||||
"name": "pasha"
|
||||
},
|
||||
{
|
||||
"name": "wamartin"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
@@ -370,9 +356,9 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
"version": "3.6.6"
|
||||
},
|
||||
"name": "03.Build_model_runHistory",
|
||||
"name": "build-model-run-history-03",
|
||||
"notebookId": 3836944406456339
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -20,13 +20,6 @@
|
||||
"Please Register Azure Container Instance(ACI) using Azure Portal: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services#portal in your subscription before using the SDK to deploy your ML model to ACI."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -45,15 +38,10 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"#'''\n",
|
||||
"ws = Workspace.from_config(auth = auth)\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
|
||||
"#'''"
|
||||
"# Set auth to be used by workspace related APIs.\n",
|
||||
"# For automation or CI/CD ServicePrincipalAuthentication can be used.\n",
|
||||
"# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py\n",
|
||||
"auth = None"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -63,18 +51,12 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)\n",
|
||||
"\n",
|
||||
"#'''\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"ws = Workspace.from_config(auth = auth)\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
|
||||
"#'''"
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -293,24 +275,14 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#comment to not delete the web service\n",
|
||||
"#myservice.delete()"
|
||||
"myservice.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "pasha"
|
||||
},
|
||||
{
|
||||
"name": "wamartin"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
@@ -328,9 +300,9 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
"version": "3.6.6"
|
||||
},
|
||||
"name": "04.DeploytoACI",
|
||||
"name": "deploy-to-aci-04",
|
||||
"notebookId": 3836944406456376
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -0,0 +1,236 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Azure ML & Azure Databricks notebooks by Parashar Shah.\n",
|
||||
"\n",
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"This notebook uses image from ACI notebook for deploying to AKS."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set auth to be used by workspace related APIs.\n",
|
||||
"# For automation or CI/CD ServicePrincipalAuthentication can be used.\n",
|
||||
"# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py\n",
|
||||
"auth = None"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config(auth = auth)\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# List images by ws\n",
|
||||
"\n",
|
||||
"from azureml.core.image import ContainerImage\n",
|
||||
"for i in ContainerImage.list(workspace = ws):\n",
|
||||
" print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.image import Image\n",
|
||||
"myimage = Image(workspace=ws, name=\"aciws\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#create AKS compute\n",
|
||||
"#it may take 20-25 minutes to create a new cluster\n",
|
||||
"\n",
|
||||
"from azureml.core.compute import AksCompute, ComputeTarget\n",
|
||||
"\n",
|
||||
"# Use the default configuration (can also provide parameters to customize)\n",
|
||||
"prov_config = AksCompute.provisioning_configuration()\n",
|
||||
"\n",
|
||||
"aks_name = 'ps-aks-demo2' \n",
|
||||
"\n",
|
||||
"# Create the cluster\n",
|
||||
"aks_target = ComputeTarget.create(workspace = ws, \n",
|
||||
" name = aks_name, \n",
|
||||
" provisioning_configuration = prov_config)\n",
|
||||
"\n",
|
||||
"aks_target.wait_for_completion(show_output = True)\n",
|
||||
"\n",
|
||||
"print(aks_target.provisioning_state)\n",
|
||||
"print(aks_target.provisioning_errors)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.webservice import Webservice\n",
|
||||
"help( Webservice.deploy_from_image)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.webservice import Webservice, AksWebservice\n",
|
||||
"from azureml.core.image import ContainerImage\n",
|
||||
"\n",
|
||||
"#Set the web service configuration (using default here with app insights)\n",
|
||||
"aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)\n",
|
||||
"\n",
|
||||
"#unique service name\n",
|
||||
"service_name ='ps-aks-service'\n",
|
||||
"\n",
|
||||
"# Webservice creation using single command, there is a variant to use image directly as well.\n",
|
||||
"aks_service = Webservice.deploy_from_image(\n",
|
||||
" workspace=ws, \n",
|
||||
" name=service_name,\n",
|
||||
" deployment_config = aks_config,\n",
|
||||
" image = myimage,\n",
|
||||
" deployment_target = aks_target\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
"aks_service.wait_for_deployment(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"aks_service.deployment_status"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#for using the Web HTTP API \n",
|
||||
"print(aks_service.scoring_uri)\n",
|
||||
"print(aks_service.get_keys())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"\n",
|
||||
"#get the some sample data\n",
|
||||
"test_data_path = \"AdultCensusIncomeTest\"\n",
|
||||
"test = spark.read.parquet(test_data_path).limit(5)\n",
|
||||
"\n",
|
||||
"test_json = json.dumps(test.toJSON().collect())\n",
|
||||
"\n",
|
||||
"print(test_json)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#using data defined above predict if income is >50K (1) or <=50K (0)\n",
|
||||
"aks_service.run(input_data=test_json)"
|
||||
]
|
||||
},
|
||||
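As an alternative to `aks_service.run`, a hedged sketch of calling the scoring endpoint over plain HTTP with the URI and key printed above, assuming the scoring script accepts the same `test_json` payload:

```python
import requests

# Use one of the service's authentication keys as a bearer token.
key = aks_service.get_keys()[0]
headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + key}

# POST the same JSON payload that was passed to aks_service.run above.
response = requests.post(aks_service.scoring_uri, data=test_json, headers=headers)
print(response.status_code)
print(response.text)
```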
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#comment to not delete the web service\n",
|
||||
"aks_service.delete()\n",
|
||||
"#image.delete()\n",
|
||||
"#model.delete()\n",
|
||||
"aks_target.delete() "
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "pasha"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
},
|
||||
"name": "deploy-to-aks-existingimage-05",
|
||||
"notebookId": 1030695628045968
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 1
|
||||
}
|
||||
@@ -11,13 +11,6 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -42,7 +35,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Download AdultCensusIncome.csv from Azure CDN. This file has 32,561 rows.\n",
|
||||
"basedataurl = \"https://amldockerdatasets.azureedge.net\"\n",
|
||||
"dataurl = \"https://amldockerdatasets.azureedge.net/AdultCensusIncome.csv\"\n",
|
||||
"datafile = \"AdultCensusIncome.csv\"\n",
|
||||
"datafile_dbfs = os.path.join(\"/dbfs\", datafile)\n",
|
||||
"\n",
|
||||
@@ -50,7 +43,7 @@
|
||||
" print(\"found {} at {}\".format(datafile, datafile_dbfs))\n",
|
||||
"else:\n",
|
||||
" print(\"downloading {} to {}\".format(datafile, datafile_dbfs))\n",
|
||||
" urllib.request.urlretrieve(os.path.join(basedataurl, datafile), datafile_dbfs)"
|
||||
" urllib.request.urlretrieve(dataurl, datafile_dbfs)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -152,9 +145,6 @@
|
||||
"authors": [
|
||||
{
|
||||
"name": "pasha"
|
||||
},
|
||||
{
|
||||
"name": "wamartin"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
@@ -172,9 +162,9 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
"version": "3.6.6"
|
||||
},
|
||||
"name": "02.Ingest_data",
|
||||
"name": "ingest-data-02",
|
||||
"notebookId": 3836944406456362
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -35,13 +35,6 @@
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -67,6 +60,18 @@
|
||||
"# workspace_region = \"<your-resource group-region>\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set auth to be used by workspace related APIs.\n",
|
||||
"# For automation or CI/CD ServicePrincipalAuthentication can be used.\n",
|
||||
"# https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py\n",
|
||||
"auth = None"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -82,6 +87,7 @@
|
||||
" subscription_id = subscription_id,\n",
|
||||
" resource_group = resource_group, \n",
|
||||
" location = workspace_region,\n",
|
||||
" auth = auth,\n",
|
||||
" exist_ok=True)"
|
||||
]
|
||||
},
|
||||
@@ -103,12 +109,13 @@
|
||||
"source": [
|
||||
"ws = Workspace(workspace_name = workspace_name,\n",
|
||||
" subscription_id = subscription_id,\n",
|
||||
" resource_group = resource_group)\n",
|
||||
" resource_group = resource_group,\n",
|
||||
" auth = auth)\n",
|
||||
"\n",
|
||||
"# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
||||
"ws.write_config()\n",
|
||||
"##if you need to give a different path/filename please use this\n",
|
||||
"##write_config(path=\"/databricks/driver/aml_config/\",file_name=<alias_conf.cfg>)"
|
||||
"#if you need to give a different path/filename please use this\n",
|
||||
"#write_config(path=\"/databricks/driver/aml_config/\",file_name=<alias_conf.cfg>)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -129,29 +136,19 @@
|
||||
"# import the Workspace class and check the azureml SDK version\n",
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"ws = Workspace.from_config(auth = auth)\n",
|
||||
"#ws = Workspace.from_config(<full path>)\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "pasha"
|
||||
},
|
||||
{
|
||||
"name": "wamartin"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
@@ -169,10 +166,10 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
"version": "3.6.6"
|
||||
},
|
||||
"name": "01.Installation_and_Configuration",
|
||||
"notebookId": 3836944406456490
|
||||
"name": "installation-and-configuration-01",
|
||||
"notebookId": 3688394266452835
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 1
|
||||
|
||||
@@ -123,13 +123,6 @@
|
||||
"ws.get_details()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -257,7 +250,16 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Load Training Data Using DataPrep"
|
||||
"## Registering Datastore"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Datastore is the way to save connection information to a storage service (e.g. Azure Blob, Azure Data Lake, Azure SQL) information to your workspace so you can access them without exposing credentials in your code. The first thing you will need to do is register a datastore, you can refer to our [python SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) on how to register datastores. __Note: for best security practices, please do not check in code that contains registering datastores with secrets into your source control__\n",
|
||||
"\n",
|
||||
"The code below registers a datastore pointing to a publicly readable blob container."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -266,19 +268,82 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Automated ML requires a dataflow, which is different from dataframe.\n",
|
||||
"#If your data is in a dataframe, please use read_pandas_dataframe to convert a dataframe to dataflow before usind dprep.\n",
|
||||
"from azureml.core import Datastore\n",
|
||||
"\n",
|
||||
"datastore_name = 'demo_training'\n",
|
||||
"Datastore.register_azure_blob_container(\n",
|
||||
" workspace = ws, \n",
|
||||
" datastore_name = datastore_name, \n",
|
||||
" container_name = 'automl-notebook-data', \n",
|
||||
" account_name = 'dprepdata'\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Below is an example on how to register a private blob container\n",
|
||||
"```python\n",
|
||||
"datastore = Datastore.register_azure_blob_container(\n",
|
||||
" workspace = ws, \n",
|
||||
" datastore_name = 'example_datastore', \n",
|
||||
" container_name = 'example-container', \n",
|
||||
" account_name = 'storageaccount',\n",
|
||||
" account_key = 'accountkey'\n",
|
||||
")\n",
|
||||
"```\n",
|
||||
"The example below shows how to register an Azure Data Lake store. Please make sure you have granted the necessary permissions for the service principal to access the data lake.\n",
|
||||
"```python\n",
|
||||
"datastore = Datastore.register_azure_data_lake(\n",
|
||||
" workspace = ws,\n",
|
||||
" datastore_name = 'example_datastore',\n",
|
||||
" store_name = 'adlsstore',\n",
|
||||
" tenant_id = 'tenant-id-of-service-principal',\n",
|
||||
" client_id = 'client-id-of-service-principal',\n",
|
||||
" client_secret = 'client-secret-of-service-principal'\n",
|
||||
")\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Load Training Data Using DataPrep"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Automated ML takes a Dataflow as input.\n",
|
||||
"\n",
|
||||
"If you are familiar with Pandas and have done your data preparation work in Pandas already, you can use the `read_pandas_dataframe` method in dprep to convert the DataFrame to a Dataflow.\n",
|
||||
"```python\n",
|
||||
"df = pd.read_csv(...)\n",
|
||||
"# apply some transforms\n",
|
||||
"dprep.read_pandas_dataframe(df, temp_folder='/path/accessible/by/both/driver/and/worker')\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"If you just need to ingest data without doing any preparation, you can directly use AzureML Data Prep (Data Prep) to do so. The code below demonstrates this scenario. Data Prep also has data preparation capabilities, we have many [sample notebooks](https://github.com/Microsoft/AMLDataPrepDocs) demonstrating the capabilities.\n",
|
||||
"\n",
|
||||
"You will get the datastore you registered previously and pass it to Data Prep for reading. The data comes from the digits dataset: `sklearn.datasets.load_digits()`. `DataPath` points to a specific location within a datastore. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.dataprep as dprep\n",
|
||||
"# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
|
||||
"# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
|
||||
"simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
|
||||
"X_train = dprep.auto_read_file(simple_example_data_root + 'X.csv').skip(1) # Remove the header row.\n",
|
||||
"from azureml.data.datapath import DataPath\n",
|
||||
"\n",
|
||||
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)\n",
|
||||
"# and convert column types manually.\n",
|
||||
"# Here we read a comma delimited file and convert all columns to integers.\n",
|
||||
"y_train = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
|
||||
"datastore = Datastore.get(workspace = ws, name = datastore_name)\n",
|
||||
"\n",
|
||||
"X_train = dprep.read_csv(DataPath(datastore = datastore, path_on_datastore = 'X.csv')) \n",
|
||||
"y_train = dprep.read_csv(DataPath(datastore = datastore, path_on_datastore = 'y.csv')).to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -286,7 +351,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Review the Data Preparation Result\n",
|
||||
"You can peek the result of a Dataflow at any range using skip(i) and head(j). Doing so evaluates only j records for all the steps in the Dataflow, which makes it fast even against large datasets."
|
||||
"You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only j records for all the steps in the Dataflow, which makes it fast even against large datasets."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -295,7 +360,16 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"X_train.skip(1).head(5)"
|
||||
"X_train.get_profile()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"y_train.get_profile()"
|
||||
]
|
||||
},
|
||||
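As the markdown above mentions, you can also peek at a small slice of a Dataflow; a quick sketch (the row count is arbitrary):

```python
# Evaluate and display only the first five records of the Dataflow.
X_train.head(5)
```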
{
|
||||
@@ -333,7 +407,8 @@
|
||||
" debug_log = 'automl_errors.log',\n",
|
||||
" primary_metric = 'AUC_weighted',\n",
|
||||
" iteration_timeout_minutes = 10,\n",
|
||||
" iterations = 30,\n",
|
||||
" iterations = 5,\n",
|
||||
" preprocess = True,\n",
|
||||
" n_cross_validations = 10,\n",
|
||||
" max_concurrent_iterations = 2, #change it based on number of worker nodes\n",
|
||||
" verbosity = logging.INFO,\n",
|
||||
@@ -349,8 +424,7 @@
|
||||
"source": [
|
||||
"## Train the Models\n",
|
||||
"\n",
|
||||
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
|
||||
"In this example, we specify `show_output = True` to print currently running iterations to the console."
|
||||
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -359,7 +433,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"local_run = experiment.submit(automl_config, show_output = True) # for higher runs please use show_output=False and use the below"
|
||||
"local_run = experiment.submit(automl_config, show_output = False) # for higher runs please use show_output=False and use the below"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -549,11 +623,11 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
"version": "3.6.5"
|
||||
},
|
||||
"name": "auto-ml-classification-local-adb",
|
||||
"notebookId": 817220787969977
|
||||
"notebookId": 587284549713154
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 0
|
||||
"nbformat_minor": 1
|
||||
}
|
||||
@@ -99,10 +99,10 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"subscription_id = \"<Your SubscriptionId>\"\n",
|
||||
"resource_group = \"<Resource group - new or existing>\"\n",
|
||||
"workspace_name = \"<workspace to be created>\"\n",
|
||||
"workspace_region = \"<azureregion>\""
|
||||
"subscription_id = \"<Your SubscriptionId>\" #you should be owner or contributor\n",
|
||||
"resource_group = \"<Resource group - new or existing>\" #you should be owner or contributor\n",
|
||||
"workspace_name = \"<workspace to be created>\" #your workspace name\n",
|
||||
"workspace_region = \"<azureregion>\" #your region"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -134,7 +134,7 @@
|
||||
"ws = Workspace.create(name = workspace_name,\n",
|
||||
" subscription_id = subscription_id,\n",
|
||||
" resource_group = resource_group, \n",
|
||||
" location = workspace_region,\n",
|
||||
" location = workspace_region, \n",
|
||||
" exist_ok=True)\n",
|
||||
"ws.get_details()"
|
||||
]
|
||||
@@ -160,7 +160,8 @@
|
||||
" resource_group = resource_group)\n",
|
||||
"\n",
|
||||
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
||||
"ws.write_config()"
|
||||
"ws.write_config()\n",
|
||||
"write_config(path=\"/databricks/driver/aml_config/\",file_name=<alias_conf.cfg>)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -262,6 +263,66 @@
|
||||
"set_diagnostics_collection(send_diagnostics = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Registering Datastore"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Datastore is the way to save connection information to a storage service (e.g. Azure Blob, Azure Data Lake, Azure SQL) information to your workspace so you can access them without exposing credentials in your code. The first thing you will need to do is register a datastore, you can refer to our [python SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) on how to register datastores. __Note: for best security practices, please do not check in code that contains registering datastores with secrets into your source control__\n",
|
||||
"\n",
|
||||
"The code below registers a datastore pointing to a publicly readable blob container."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Datastore\n",
|
||||
"\n",
|
||||
"datastore_name = 'demo_training'\n",
|
||||
"Datastore.register_azure_blob_container(\n",
|
||||
" workspace = ws, \n",
|
||||
" datastore_name = datastore_name, \n",
|
||||
" container_name = 'automl-notebook-data', \n",
|
||||
" account_name = 'dprepdata'\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Below is an example on how to register a private blob container\n",
|
||||
"```python\n",
|
||||
"datastore = Datastore.register_azure_blob_container(\n",
|
||||
" workspace = ws, \n",
|
||||
" datastore_name = 'example_datastore', \n",
|
||||
" container_name = 'example-container', \n",
|
||||
" account_name = 'storageaccount',\n",
|
||||
" account_key = 'accountkey'\n",
|
||||
")\n",
|
||||
"```\n",
|
||||
"The example below shows how to register an Azure Data Lake store. Please make sure you have granted the necessary permissions for the service principal to access the data lake.\n",
|
||||
"```python\n",
|
||||
"datastore = Datastore.register_azure_data_lake(\n",
|
||||
" workspace = ws,\n",
|
||||
" datastore_name = 'example_datastore',\n",
|
||||
" store_name = 'adlsstore',\n",
|
||||
" tenant_id = 'tenant-id-of-service-principal',\n",
|
||||
" client_id = 'client-id-of-service-principal',\n",
|
||||
" client_secret = 'client-secret-of-service-principal'\n",
|
||||
")\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -269,6 +330,24 @@
|
||||
"## Load Training Data Using DataPrep"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Automated ML takes a Dataflow as input.\n",
|
||||
"\n",
|
||||
"If you are familiar with Pandas and have done your data preparation work in Pandas already, you can use the `read_pandas_dataframe` method in dprep to convert the DataFrame to a Dataflow.\n",
|
||||
"```python\n",
|
||||
"df = pd.read_csv(...)\n",
|
||||
"# apply some transforms\n",
|
||||
"dprep.read_pandas_dataframe(df, temp_folder='/path/accessible/by/both/driver/and/worker')\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"If you just need to ingest data without doing any preparation, you can directly use AzureML Data Prep (Data Prep) to do so. The code below demonstrates this scenario. Data Prep also has data preparation capabilities, we have many [sample notebooks](https://github.com/Microsoft/AMLDataPrepDocs) demonstrating the capabilities.\n",
|
||||
"\n",
|
||||
"You will get the datastore you registered previously and pass it to Data Prep for reading. The data comes from the digits dataset: `sklearn.datasets.load_digits()`. `DataPath` points to a specific location within a datastore. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -276,15 +355,12 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.dataprep as dprep\n",
|
||||
"# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
|
||||
"# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
|
||||
"simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
|
||||
"X_train = dprep.auto_read_file(simple_example_data_root + 'X.csv').skip(1) # Remove the header row.\n",
|
||||
"from azureml.data.datapath import DataPath\n",
|
||||
"\n",
|
||||
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)\n",
|
||||
"# and convert column types manually.\n",
|
||||
"# Here we read a comma delimited file and convert all columns to integers.\n",
|
||||
"y_train = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
|
||||
"datastore = Datastore.get(workspace = ws, name = datastore_name)\n",
|
||||
"\n",
|
||||
"X_train = dprep.read_csv(DataPath(datastore = datastore, path_on_datastore = 'X.csv')) \n",
|
||||
"y_train = dprep.read_csv(DataPath(datastore = datastore, path_on_datastore = 'y.csv')).to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -301,7 +377,16 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"X_train.skip(1).head(5)"
|
||||
"X_train.get_profile()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"y_train.get_profile()"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -339,14 +424,14 @@
|
||||
" debug_log = 'automl_errors.log',\n",
|
||||
" primary_metric = 'AUC_weighted',\n",
|
||||
" iteration_timeout_minutes = 10,\n",
|
||||
" iterations = 5,\n",
|
||||
" n_cross_validations = 2,\n",
|
||||
" max_concurrent_iterations = 4, #change it based on number of worker nodes\n",
|
||||
" iterations = 30,\n",
|
||||
" preprocess = True,\n",
|
||||
" n_cross_validations = 10,\n",
|
||||
" max_concurrent_iterations = 2, #change it based on number of worker nodes\n",
|
||||
" verbosity = logging.INFO,\n",
|
||||
" spark_context=sc, #databricks/spark related\n",
|
||||
" X = X_train, \n",
|
||||
" y = y_train,\n",
|
||||
" enable_cache=False,\n",
|
||||
" path = project_folder)"
|
||||
]
|
||||
},
|
||||
@@ -356,8 +441,7 @@
|
||||
"source": [
|
||||
"## Train the Models\n",
|
||||
"\n",
|
||||
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
|
||||
"In this example, we specify `show_output = True` to print currently running iterations to the console."
|
||||
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -366,7 +450,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"local_run = experiment.submit(automl_config, show_output = True) # for higher runs please use show_output=False and use the below"
|
||||
"local_run = experiment.submit(automl_config, show_output = False) # for higher runs please use show_output=False and use the below"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -419,6 +503,7 @@
|
||||
"metricslist = {}\n",
|
||||
"for run in children:\n",
|
||||
" properties = run.get_properties()\n",
|
||||
" #print(properties)\n",
|
||||
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
|
||||
" metricslist[int(properties['iteration'])] = metrics\n",
|
||||
"\n",
|
||||
@@ -694,11 +779,11 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
"version": "3.6.5"
|
||||
},
|
||||
"name": "auto-ml-classification-local-adb",
|
||||
"notebookId": 3888835968049288
|
||||
"notebookId": 2733885892129020
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 0
|
||||
"nbformat_minor": 1
|
||||
}
|
||||
1
how-to-use-azureml/azure-databricks/testdata.txt
Normal file
@@ -0,0 +1 @@
|
||||
Test1
|
||||
5
how-to-use-azureml/azure-databricks/train-db-dbfs.py
Normal file
@@ -0,0 +1,5 @@
|
||||
# Copyright (c) Microsoft. All rights reserved.
|
||||
# Licensed under the MIT license.
|
||||
|
||||
print("In train.py")
|
||||
print("As a data scientist, this is where I use my training code.")
|
||||
5
how-to-use-azureml/azure-databricks/train-db-local.py
Normal file
@@ -0,0 +1,5 @@
|
||||
# Copyright (c) Microsoft. All rights reserved.
|
||||
# Licensed under the MIT license.
|
||||
|
||||
print("In train.py")
|
||||
print("As a data scientist, this is where I use my training code.")
|
||||
Binary file not shown.
|
@@ -67,8 +67,7 @@
|
||||
"source": [
|
||||
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json\n",
|
||||
"\n",
|
||||
"If you don't have a config.json file, please go through the configuration Notebook located here:\n",
|
||||
"https://github.com/Azure/MachineLearningNotebooks. \n",
|
||||
"If you don't have a config.json file, please go through the configuration Notebook located [here](https://github.com/Azure/MachineLearningNotebooks). \n",
|
||||
"\n",
|
||||
"This sets you up with a working config file that has information on your workspace, subscription id, etc. "
|
||||
]
|
||||
@@ -80,7 +79,11 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
|
||||
"\n",
|
||||
"print('Workspace Name: ' + ws.name, \n",
|
||||
" 'Azure Region: ' + ws.location, \n",
|
||||
" 'Subscription Id: ' + ws.subscription_id, \n",
|
||||
" 'Resource Group: ' + ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -114,7 +117,8 @@
|
||||
" batch_compute = BatchCompute(ws, batch_compute_name)\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print('Attaching Batch compute...')\n",
|
||||
" provisioning_config = BatchCompute.attach_configuration(resource_group=batch_resource_group, account_name=batch_account_name)\n",
|
||||
" provisioning_config = BatchCompute.attach_configuration(resource_group=batch_resource_group, \n",
|
||||
" account_name=batch_account_name)\n",
|
||||
" batch_compute = ComputeTarget.attach(ws, batch_compute_name, provisioning_config)\n",
|
||||
" batch_compute.wait_for_completion()\n",
|
||||
" print(\"Provisioning state:{}\".format(batch_compute.provisioning_state))\n",
|
||||
@@ -127,7 +131,19 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup DataStore"
|
||||
"## Setup Datastore"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Setting up the Blob storage associated with the workspace. \n",
|
||||
"The following call retrieves the Azure Blob Store associated with your workspace. \n",
|
||||
"Note that workspaceblobstore is **the name of this store and CANNOT BE CHANGED and must be used as is**. \n",
|
||||
" \n",
|
||||
"If you want to register another Datastore, please follow the instructions from here:\n",
|
||||
"https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data#register-a-datastore"
|
||||
]
|
||||
},
|
||||
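For reference, a hedged sketch of registering an additional blob datastore as described at that link (the datastore name, container, account, and key below are placeholders, not values used by this notebook):

```python
from azureml.core import Datastore

# Placeholder storage details; do not check real account keys into source control.
my_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name='my_datastore',
    container_name='my-container',
    account_name='mystorageaccount',
    account_key='<account-key>')
```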
{
|
||||
@@ -136,11 +152,12 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Blob storage associated with the workspace\n",
|
||||
"# The following call GETS the Azure Blob Store associated with your workspace.\n",
|
||||
"# Note that workspaceblobstore is **the name of this store and CANNOT BE CHANGED and must be used as is** \n",
|
||||
"default_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
|
||||
"print(\"Blobstore name: {}\".format(def_blob_store.name))"
|
||||
"datastore = Datastore(ws, \"workspaceblobstore\")\n",
|
||||
"\n",
|
||||
"print('Datastore details:')\n",
|
||||
"print('Datastore Account Name: ' + datastore.account_name)\n",
|
||||
"print('Datastore Workspace Name: ' + datastore.workspace.name)\n",
|
||||
"print('Datastore Container Name: ' + datastore.container_name)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -154,7 +171,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"For this example we will upload a file in the provided DataStore. These are some helper methods to achieve that."
|
||||
"For this example we will upload a file in the provided Datastore. These are some helper methods to achieve that."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -171,16 +188,16 @@
|
||||
" return temp_dir\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def upload_file_to_datastore(datastore, path, content):\n",
|
||||
" dir = create_local_file(content=content, file_name=\"temp.file\")\n",
|
||||
" datastore.upload(src_dir=dir, target_path=path, overwrite=True, show_progress=True)"
|
||||
"def upload_file_to_datastore(datastore, file_name, content):\n",
|
||||
" dir = create_local_file(content=content, file_name=file_name)\n",
|
||||
" datastore.upload(src_dir=dir, overwrite=True, show_progress=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Here we associate the input DataReference with an existing file in the provided DataStore. Feel free to upload the file of your choice manually or use the *upload_testdata* method. "
|
||||
"Here we associate the input DataReference with an existing file in the provided Datastore. Feel free to upload the file of your choice manually or use the *upload_file_to_datastore* method. "
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -189,14 +206,14 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"testdata_path=\"testdata.txt\"\n",
|
||||
"file_name=\"input.txt\"\n",
|
||||
"\n",
|
||||
"upload_file_to_datastore(datastore=default_blob_store, \n",
|
||||
" path=testdata_path, \n",
|
||||
" content=\"This is the content of the file\")\n",
|
||||
"upload_file_to_datastore(datastore=datastore, \n",
|
||||
" file_name=file_name, \n",
|
||||
" content=\"this is the content of the file\")\n",
|
||||
"\n",
|
||||
"testdata = DataReference(datastore=default_blob_store, \n",
|
||||
" path_on_datastore=testdata_path, \n",
|
||||
"testdata = DataReference(datastore=datastore, \n",
|
||||
" path_on_datastore=file_name, \n",
|
||||
" data_reference_name=\"input\")\n",
|
||||
"\n",
|
||||
"outputdata = PipelineData(name=\"output\", datastore=datastore)"
|
||||
@@ -224,7 +241,7 @@
|
||||
"source": [
|
||||
"binaries_folder = \"azurebatch/job_binaries\"\n",
|
||||
"if not os.path.isdir(binaries_folder):\n",
|
||||
" os.mkdir(project_folder)\n",
|
||||
" os.mkdir(binaries_folder)\n",
|
||||
"\n",
|
||||
"file_name=\"azurebatch.cmd\"\n",
|
||||
"with open(path.join(binaries_folder, file_name), 'w') as f:\n",
|
||||
|
||||
@@ -29,7 +29,8 @@
|
||||
"import os\n",
|
||||
"import shutil\n",
|
||||
"import urllib\n",
|
||||
"from azureml.core import Experiment\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core import Workspace, Experiment\n",
|
||||
"from azureml.core.datastore import Datastore\n",
|
||||
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
|
||||
"from azureml.exceptions import ComputeTargetException\n",
|
||||
@@ -109,7 +110,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Upload MNIST dataset to blob datastore \n",
|
||||
"A [datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data) is a place where data can be stored that is then made accessible to a Run either by means of mounting or copying the data to the compute target. A datastore can either be backed by an Azure Blob Storage or and Azure File Share (ADLS will be supported in the future). In the next step, we will use Azure Blob Storage and upload the training and test set into the Azure Blob datastore, which we will then later be mount on a Batch AI cluster for training."
|
||||
"A [datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data) is a place where data can be stored that is then made accessible to a Run either by means of mounting or copying the data to the compute target. In the next step, we will use Azure Blob Storage and upload the training and test set into the Azure Blob datastore, which we will then later be mount on a Batch AI cluster for training."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -118,7 +119,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ds = Datastore(workspace=ws, name=\"MyBlobDatastore\")\n",
|
||||
"ds = ws.get_default_datastore()\n",
|
||||
"ds.upload(src_dir='./data/mnist', target_path='mnist', overwrite=True, show_progress=True)"
|
||||
]
|
||||
},
|
||||
@@ -129,12 +130,12 @@
|
||||
"## Retrieve or create a Azure Machine Learning compute\n",
|
||||
"Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Azure Machine Learning Compute in the current workspace, if it doesn't already exist. We will then run the training script on this compute target.\n",
|
||||
"\n",
|
||||
"If we could not find the compute with the given name in the previous cell, then we will create a new compute here. We will create an Azure Machine Learning Compute containing **STANDARD_D2_V2 CPU VMs**. This process is broken down into the following steps:\n",
|
||||
"If we could not find the compute with the given name in the previous cell, then we will create a new compute here. This process is broken down into the following steps:\n",
|
||||
"\n",
|
||||
"1. Create the configuration\n",
|
||||
"2. Create the Azure Machine Learning compute\n",
|
||||
"\n",
|
||||
"**This process will take about 3 minutes and is providing only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell.**\n"
|
||||
"**This process will take a few minutes and is providing only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell.**\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -143,7 +144,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"cluster_name = \"aml-compute\"\n",
|
||||
"cluster_name = \"gpucluster\"\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
|
||||
@@ -320,7 +321,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Build the experiment"
|
||||
"### Run the pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -329,31 +330,15 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline = Pipeline(workspace=ws, steps=[hd_step])"
|
||||
"pipeline = Pipeline(workspace=ws, steps=[hd_step])\n",
|
||||
"pipeline_run = Experiment(ws, 'Hyperdrive_Test').submit(pipeline)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Submit the experiment "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline_run = Experiment(ws, 'Hyperdrive_Test').submit(pipeline)\n",
|
||||
"pipeline_run.wait_for_completion()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### View Run Details"
|
||||
"### Monitor using widget"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -365,6 +350,22 @@
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"RunDetails(pipeline_run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Wait for the completion of this Pipeline run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline_run.wait_for_completion()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -77,7 +77,7 @@
|
||||
"source": [
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"\n",
|
||||
"aml_compute_target = \"aml-compute\"\n",
|
||||
"aml_compute_target = \"cpucluster\"\n",
|
||||
"try:\n",
|
||||
" aml_compute = AmlCompute(ws, aml_compute_target)\n",
|
||||
" print(\"found existing compute target.\")\n",
|
||||
@@ -280,7 +280,8 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Publish the pipeline"
|
||||
"## Run published pipeline\n",
|
||||
"### Publish the pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -290,7 +291,34 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"published_pipeline1 = pipeline1.publish(name=\"My_New_Pipeline\", description=\"My Published Pipeline Description\")\n",
|
||||
"print(published_pipeline1.id)"
|
||||
"published_pipeline1"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Get published pipeline\n",
|
||||
"\n",
|
||||
"You can get the published pipeline using **pipeline id**.\n",
|
||||
"\n",
|
||||
"To get all the published pipelines for a given workspace(ws): \n",
|
||||
"```css\n",
|
||||
"all_pub_pipelines = PublishedPipeline.get_all(ws)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core import PublishedPipeline\n",
|
||||
"\n",
|
||||
"pipeline_id = published_pipeline1.id # use your published pipeline id\n",
|
||||
"published_pipeline = PublishedPipeline.get(ws, pipeline_id)\n",
|
||||
"published_pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -312,9 +340,9 @@
|
||||
"auth = InteractiveLoginAuthentication()\n",
|
||||
"aad_token = auth.get_authentication_header()\n",
|
||||
"\n",
|
||||
"rest_endpoint1 = published_pipeline1.endpoint\n",
|
||||
"rest_endpoint1 = published_pipeline.endpoint\n",
|
||||
"\n",
|
||||
"print(rest_endpoint1)\n",
|
||||
"print(\"You can perform HTTP POST on URL {} to trigger this pipeline\".format(rest_endpoint1))\n",
|
||||
"\n",
|
||||
"# specify the param when running the pipeline\n",
|
||||
"response = requests.post(rest_endpoint1, \n",
|
||||
|
||||
@@ -204,7 +204,8 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a schedule for the pipeline"
|
||||
"### Create a schedule for the pipeline using a recurrence\n",
|
||||
"This schedule will run on a specified recurrence interval."
|
||||
]
|
||||
},
|
||||
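The creation cell itself is outside this hunk; a hedged sketch of what a recurrence-based `Schedule.create` call could look like (the frequency, interval, and names are placeholders, kept consistent with the `pub_pipeline_id` and `Schedule` usage elsewhere in this notebook):

```python
from azureml.pipeline.core.schedule import Schedule, ScheduleRecurrence

# Placeholder recurrence: run the published pipeline every 4 hours.
recurrence = ScheduleRecurrence(frequency="Hour", interval=4)

schedule = Schedule.create(workspace=ws, name="My_Schedule",
                           pipeline_id=pub_pipeline_id,
                           experiment_name='Schedule_Run',
                           recurrence=recurrence,
                           wait_for_provisioning=True,
                           description="Schedule Run")

print("Created schedule with id: {}".format(schedule.id))
```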
{
|
||||
@@ -345,7 +346,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Change reccurence of the schedule"
|
||||
"### Change recurrence of the schedule"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -366,13 +367,58 @@
|
||||
" wait_for_provisioning=True,\n",
|
||||
" recurrence=recurrence)\n",
|
||||
"\n",
|
||||
"fetched_schedule = Schedule.get_schedule(ws, fetched_schedule.id)\n",
|
||||
"fetched_schedule = Schedule.get(ws, fetched_schedule.id)\n",
|
||||
"\n",
|
||||
"print(\"Updated schedule:\", fetched_schedule.id, \n",
|
||||
" \"\\nNew name:\", fetched_schedule.name,\n",
|
||||
" \"\\nNew frequency:\", fetched_schedule.recurrence.frequency,\n",
|
||||
" \"\\nNew status:\", fetched_schedule.status)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a schedule for the pipeline using a Datastore\n",
|
||||
"This schedule will run when additions or modifications are made to Blobs in the Datastore container.\n",
|
||||
"Note: Only Blob Datastores are supported."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.datastore import Datastore\n",
|
||||
"\n",
|
||||
"datastore = Datastore(workspace=ws, name=\"workspaceblobstore\")\n",
|
||||
"\n",
|
||||
"schedule = Schedule.create(workspace=ws, name=\"My_Schedule\",\n",
|
||||
" pipeline_id=pub_pipeline_id, \n",
|
||||
" experiment_name='Schedule_Run',\n",
|
||||
" datastore=datastore,\n",
|
||||
" wait_for_provisioning=True,\n",
|
||||
" description=\"Schedule Run\")\n",
|
||||
"\n",
|
||||
"# You may want to make sure that the schedule is provisioned properly\n",
|
||||
"# before making any further changes to the schedule\n",
|
||||
"\n",
|
||||
"print(\"Created schedule with id: {}\".format(schedule.id))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
|
||||
"# for the call to provision the schedule in the backend.\n",
|
||||
"schedule.disable(wait_for_provisioning=True)\n",
|
||||
"schedule = Schedule.get(ws, schedule_id)\n",
|
||||
"print(\"Disabled schedule {}. New status is: {}\".format(schedule.id, schedule.status))"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -165,7 +165,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# choose a name for your cluster\n",
|
||||
"aml_compute_name = os.environ.get(\"AML_COMPUTE_NAME\", \"gpu-cluster\")\n",
|
||||
"aml_compute_name = os.environ.get(\"AML_COMPUTE_NAME\", \"gpucluster\")\n",
|
||||
"cluster_min_nodes = os.environ.get(\"AML_COMPUTE_MIN_NODES\", 0)\n",
|
||||
"cluster_max_nodes = os.environ.get(\"AML_COMPUTE_MAX_NODES\", 1)\n",
|
||||
"vm_size = os.environ.get(\"AML_COMPUTE_SKU\", \"STANDARD_NC6\")\n",
|
||||
@@ -466,7 +466,35 @@
|
||||
"published_pipeline = pipeline_run.publish_pipeline(\n",
|
||||
" name=\"Inception_v3_scoring\", description=\"Batch scoring using Inception v3 model\", version=\"1.0\")\n",
|
||||
"\n",
|
||||
"published_id = published_pipeline.id"
|
||||
"published_pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Get published pipeline\n",
|
||||
"\n",
|
||||
"You can get the published pipeline using **pipeline id**.\n",
|
||||
"\n",
|
||||
"To get all the published pipelines for a given workspace(ws): \n",
|
||||
"```css\n",
|
||||
"all_pub_pipelines = PublishedPipeline.get_all(ws)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core import PublishedPipeline\n",
|
||||
"\n",
|
||||
"pipeline_id = published_pipeline.id # use your published pipeline id\n",
|
||||
"published_pipeline = PublishedPipeline.get(ws, pipeline_id)\n",
|
||||
"\n",
|
||||
"published_pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -120,7 +120,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Python Scripts\n",
|
||||
"We use an edited version of `neural_style_mpi.py` (original is [here](https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style_mpi.py)). Scripts to split and stitch the video are thin wrappers to calls to `ffmpeg`. \n",
|
||||
"We use an edited version of `neural_style_mpi.py` (original is [here](https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py)). Scripts to split and stitch the video are thin wrappers to calls to `ffmpeg`. \n",
|
||||
"\n",
|
||||
"We install `ffmpeg` through conda dependencies."
|
||||
]
|
||||
@@ -201,6 +201,13 @@
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The sample video **organutan.mp4** is stored at a publicly shared datastore. We are registering the datastore below. If you want to take a look at the original video, click here. (https://pipelinedata.blob.core.windows.net/sample-videos/orangutan.mp4)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -208,8 +215,8 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# datastore for input video\n",
|
||||
"account_name = \"happypathspublic\"\n",
|
||||
"video_ds = Datastore.register_azure_blob_container(ws, \"videos\", \"videos\",\n",
|
||||
"account_name = \"pipelinedata\"\n",
|
||||
"video_ds = Datastore.register_azure_blob_container(ws, \"videos\", \"sample-videos\",\n",
|
||||
" account_name=account_name, overwrite=True)\n",
|
||||
"\n",
|
||||
"# datastore for models\n",
|
||||
@@ -238,9 +245,10 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"video_name=os.getenv(\"STYLE_TRANSFER_VIDEO_NAME\", \"orangutan.mp4\") \n",
|
||||
"orangutan_video = DataReference(datastore=video_ds,\n",
|
||||
" data_reference_name=\"video\",\n",
|
||||
" path_on_datastore=\"orangutan.mp4\", mode=\"download\")"
|
||||
" path_on_datastore=video_name, mode=\"download\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -442,7 +450,35 @@
|
||||
"published_pipeline = pipeline_run.publish_pipeline(\n",
|
||||
" name=\"batch score style transfer\", description=\"style transfer\", version=\"1.0\")\n",
|
||||
"\n",
|
||||
"published_id = published_pipeline.id"
|
||||
"published_pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Get published pipeline\n",
|
||||
"\n",
|
||||
"You can get the published pipeline using **pipeline id**.\n",
|
||||
"\n",
|
||||
"To get all the published pipelines for a given workspace(ws): \n",
|
||||
"```css\n",
|
||||
"all_pub_pipelines = PublishedPipeline.get_all(ws)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core import PublishedPipeline\n",
|
||||
"\n",
|
||||
"pipeline_id = published_pipeline.id # use your published pipeline id\n",
|
||||
"published_pipeline = PublishedPipeline.get(ws, pipeline_id)\n",
|
||||
"\n",
|
||||
"published_pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -542,7 +578,7 @@
|
||||
"response = requests.post(rest_endpoint, \n",
|
||||
" headers=aad_token,\n",
|
||||
" json={\"ExperimentName\": \"style_transfer\",\n",
|
||||
" \"ParameterAssignments\": {\"style\": \"udnie\", \"nodecount\": 4}}) \n",
|
||||
" \"ParameterAssignments\": {\"style\": \"udnie\", \"nodecount\": 3}}) \n",
|
||||
"run_id = response.json()[\"Id\"]\n",
|
||||
"\n",
|
||||
"published_pipeline_run_udnie = PipelineRun(ws.experiments[\"style_transfer\"], run_id)\n",
|
||||
|
||||
@@ -209,8 +209,8 @@
|
||||
"\n",
|
||||
"svc_pr = ServicePrincipalAuthentication(\n",
|
||||
" tenant_id=\"my-tenant-id\",\n",
|
||||
" username=\"my-application-id\",\n",
|
||||
" password=svc_pr_password)\n",
|
||||
" service_principal_id=\"my-application-id\",\n",
|
||||
" service_principal_password=svc_pr_password)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"ws = Workspace(\n",
|
||||
|
||||
@@ -5,11 +5,14 @@ These examples show you:
|
||||
1. [How to use the Estimator pattern in Azure ML](how-to-use-estimator)
|
||||
2. [Train using TensorFlow Estimator and tune hyperparameters using Hyperdrive](train-hyperparameter-tune-deploy-with-tensorflow)
|
||||
3. [Train using Pytorch Estimator and tune hyperparameters using Hyperdrive](train-hyperparameter-tune-deploy-with-pytorch)
|
||||
4. [Distributed training using TensorFlow and Parameter Server](distributed-tensorflow-with-parameter-server)
|
||||
5. [Distributed training using TensorFlow and Horovod](distributed-tensorflow-with-horovod)
|
||||
6. [Distributed training using Pytorch and Horovod](distributed-pytorch-with-horovod)
|
||||
7. [Distributed training using CNTK and custom Docker image](distributed-cntk-with-custom-docker)
|
||||
8. [Export run history records to Tensorboard](export-run-history-to-tensorboard)
|
||||
9. [Use TensorBoard to monitor training execution](tensorboard)
|
||||
4. [Train using Keras and tune hyperparameters using Hyperdrive](train-hyperparameter-tune-deploy-with-keras)
|
||||
5. [Train using Chainer Estimator and tune hyperparameters using Hyperdrive](train-hyperparameter-tune-deploy-with-chainer)
|
||||
6. [Distributed training using TensorFlow and Parameter Server](distributed-tensorflow-with-parameter-server)
|
||||
7. [Distributed training using TensorFlow and Horovod](distributed-tensorflow-with-horovod)
|
||||
8. [Distributed training using Pytorch and Horovod](distributed-pytorch-with-horovod)
|
||||
9. [Distributed training using CNTK and custom Docker image](distributed-cntk-with-custom-docker)
|
||||
10. [Distributed training using Chainer](distributed-chainer)
|
||||
11. [Export run history records to Tensorboard](export-run-history-to-tensorboard)
|
||||
12. [Use TensorBoard to monitor training execution](tensorboard)
|
||||
|
||||
Learn more about how to use `Estimator` class to [train deep neural networks with Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-ml-models).
|
||||
|
||||
@@ -0,0 +1,153 @@
|
||||
|
||||
import argparse
|
||||
|
||||
import chainer
|
||||
import chainer.cuda
|
||||
import chainer.functions as F
|
||||
import chainer.links as L
|
||||
from chainer import training
|
||||
from chainer.training import extensions
|
||||
|
||||
import chainermn
|
||||
import chainermn.datasets
|
||||
import chainermn.functions
|
||||
|
||||
|
||||
chainer.disable_experimental_feature_warning = True
|
||||
|
||||
|
||||
class MLP0SubA(chainer.Chain):
|
||||
def __init__(self, comm, n_out):
|
||||
super(MLP0SubA, self).__init__(
|
||||
l1=L.Linear(784, n_out))
|
||||
|
||||
def __call__(self, x):
|
||||
return F.relu(self.l1(x))
|
||||
|
||||
|
||||
class MLP0SubB(chainer.Chain):
|
||||
def __init__(self, comm):
|
||||
super(MLP0SubB, self).__init__()
|
||||
|
||||
def __call__(self, y):
|
||||
return y
|
||||
|
||||
|
||||
class MLP0(chainermn.MultiNodeChainList):
|
||||
# Model on worker 0.
|
||||
def __init__(self, comm, n_out):
|
||||
super(MLP0, self).__init__(comm=comm)
|
||||
self.add_link(MLP0SubA(comm, n_out), rank_in=None, rank_out=1)
|
||||
self.add_link(MLP0SubB(comm), rank_in=1, rank_out=None)
|
||||
|
||||
|
||||
class MLP1Sub(chainer.Chain):
|
||||
def __init__(self, n_units, n_out):
|
||||
super(MLP1Sub, self).__init__(
|
||||
l2=L.Linear(None, n_units),
|
||||
l3=L.Linear(None, n_out))
|
||||
|
||||
def __call__(self, h0):
|
||||
h1 = F.relu(self.l2(h0))
|
||||
return self.l3(h1)
|
||||
|
||||
|
||||
class MLP1(chainermn.MultiNodeChainList):
|
||||
# Model on worker 1.
|
||||
def __init__(self, comm, n_units, n_out):
|
||||
super(MLP1, self).__init__(comm=comm)
|
||||
self.add_link(MLP1Sub(n_units, n_out), rank_in=0, rank_out=0)
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description='ChainerMN example: pipelined neural network')
|
||||
parser.add_argument('--batchsize', '-b', type=int, default=100,
|
||||
help='Number of images in each mini-batch')
|
||||
parser.add_argument('--epoch', '-e', type=int, default=20,
|
||||
help='Number of sweeps over the dataset to train')
|
||||
parser.add_argument('--gpu', '-g', action='store_true',
|
||||
help='Use GPU')
|
||||
parser.add_argument('--out', '-o', default='result',
|
||||
help='Directory to output the result')
|
||||
parser.add_argument('--unit', '-u', type=int, default=1000,
|
||||
help='Number of units')
|
||||
args = parser.parse_args()
|
||||
|
||||
# Prepare ChainerMN communicator.
|
||||
if args.gpu:
|
||||
comm = chainermn.create_communicator('hierarchical')
|
||||
data_axis, model_axis = comm.rank % 2, comm.rank // 2
|
||||
data_comm = comm.split(data_axis, comm.rank)
|
||||
model_comm = comm.split(model_axis, comm.rank)
|
||||
device = comm.intra_rank
|
||||
else:
|
||||
comm = chainermn.create_communicator('naive')
|
||||
data_axis, model_axis = comm.rank % 2, comm.rank // 2
|
||||
data_comm = comm.split(data_axis, comm.rank)
|
||||
model_comm = comm.split(model_axis, comm.rank)
|
||||
device = -1
|
||||
|
||||
if model_comm.size != 2:
|
||||
raise ValueError(
|
||||
'This example can only be executed on an even number '
|
||||
'of processes.')
|
||||
|
||||
if comm.rank == 0:
|
||||
print('==========================================')
|
||||
if args.gpu:
|
||||
print('Using GPUs')
|
||||
print('Num unit: {}'.format(args.unit))
|
||||
print('Num Minibatch-size: {}'.format(args.batchsize))
|
||||
print('Num epoch: {}'.format(args.epoch))
|
||||
print('==========================================')
|
||||
|
||||
if data_axis == 0:
|
||||
model = L.Classifier(MLP0(model_comm, args.unit))
|
||||
elif data_axis == 1:
|
||||
model = MLP1(model_comm, args.unit, 10)
|
||||
|
||||
if device >= 0:
|
||||
chainer.cuda.get_device_from_id(device).use()
|
||||
model.to_gpu()
|
||||
|
||||
optimizer = chainermn.create_multi_node_optimizer(
|
||||
chainer.optimizers.Adam(), data_comm)
|
||||
optimizer.setup(model)
|
||||
|
||||
# Original dataset on worker 0 and 1.
|
||||
# Datasets of worker 0 and 1 are split and distributed to all workers.
|
||||
if model_axis == 0:
|
||||
train, test = chainer.datasets.get_mnist()
|
||||
if data_axis == 1:
|
||||
train = chainermn.datasets.create_empty_dataset(train)
|
||||
test = chainermn.datasets.create_empty_dataset(test)
|
||||
else:
|
||||
train, test = None, None
|
||||
train = chainermn.scatter_dataset(train, data_comm, shuffle=True)
|
||||
test = chainermn.scatter_dataset(test, data_comm, shuffle=True)
|
||||
|
||||
train_iter = chainer.iterators.SerialIterator(
|
||||
train, args.batchsize, shuffle=False)
|
||||
test_iter = chainer.iterators.SerialIterator(
|
||||
test, args.batchsize, repeat=False, shuffle=False)
|
||||
|
||||
updater = training.StandardUpdater(train_iter, optimizer, device=device)
|
||||
trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
|
||||
evaluator = extensions.Evaluator(test_iter, model, device=device)
|
||||
evaluator = chainermn.create_multi_node_evaluator(evaluator, data_comm)
|
||||
trainer.extend(evaluator)
|
||||
|
||||
# Some display and output extensions are necessary only for worker 0.
|
||||
if comm.rank == 0:
|
||||
trainer.extend(extensions.LogReport())
|
||||
trainer.extend(extensions.PrintReport(
|
||||
['epoch', 'main/loss', 'validation/main/loss',
|
||||
'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
|
||||
trainer.extend(extensions.ProgressBar())
|
||||
|
||||
trainer.run()
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
@@ -0,0 +1,315 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Distributed Chainer\n",
|
||||
"In this tutorial, you will run a Chainer training example on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using ChainerMN distributed training across a GPU cluster."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Diagnostics\n",
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"Diagnostics"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"\n",
|
||||
"set_diagnostics_collection(send_diagnostics=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize workspace\n",
|
||||
"\n",
|
||||
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create or attach existing AmlCompute\n",
|
||||
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource. Specifically, the below code creates an `STANDARD_NC6` GPU cluster that autoscales from `0` to `4` nodes.\n",
|
||||
"\n",
|
||||
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process.\n",
|
||||
"\n",
|
||||
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"\n",
|
||||
"# choose a name for your cluster\n",
|
||||
"cluster_name = \"gpucluster\"\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
|
||||
" print('Found existing compute target.')\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print('Creating a new compute target...')\n",
|
||||
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',\n",
|
||||
" max_nodes=4)\n",
|
||||
"\n",
|
||||
" # create the cluster\n",
|
||||
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
|
||||
"\n",
|
||||
" compute_target.wait_for_completion(show_output=True)\n",
|
||||
"\n",
|
||||
"# use get_status() to get a detailed status for the current AmlCompute. \n",
|
||||
"print(compute_target.get_status().serialize())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above code creates GPU compute. If you instead want to create CPU compute, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`."
|
||||
]
|
||||
},
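For comparison, a CPU cluster would only differ in the VM size passed to the provisioning configuration; a minimal sketch (the cluster name here is illustrative):

```python
# illustrative CPU counterpart of the GPU cluster created above
cpu_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                   max_nodes=4)
cpu_target = ComputeTarget.create(ws, 'cpucluster', cpu_config)
cpu_target.wait_for_completion(show_output=True)
```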
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Train model on the remote compute\n",
|
||||
"Now that we have the AmlCompute ready to go, let's run our distributed training job."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a project directory\n",
|
||||
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"project_folder = './chainer-distr'\n",
|
||||
"os.makedirs(project_folder, exist_ok=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Prepare training script\n",
|
||||
"Now you will need to create your training script. In this tutorial, the script for distributed training of MNIST is already provided for you at `train_mnist.py`. In practice, you should be able to take any custom Chainer training script as is and run it with Azure ML without having to modify your code."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Once your script is ready, copy the training script `train_mnist.py` into the project directory."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import shutil\n",
|
||||
"\n",
|
||||
"shutil.copy('train_mnist.py', project_folder)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create an experiment\n",
|
||||
"Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this distributed Chainer tutorial. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Experiment\n",
|
||||
"\n",
|
||||
"experiment_name = 'chainer-distr'\n",
|
||||
"experiment = Experiment(ws, name=experiment_name)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a Chainer estimator\n",
|
||||
"The Azure ML SDK's Chainer estimator enables you to easily submit Chainer training jobs for both single-node and distributed runs."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.train.dnn import Chainer\n",
|
||||
"\n",
|
||||
"estimator = Chainer(source_directory=project_folder,\n",
|
||||
" compute_target=compute_target,\n",
|
||||
" entry_script='train_mnist.py',\n",
|
||||
" node_count=2,\n",
|
||||
" process_count_per_node=1,\n",
|
||||
" distributed_backend='mpi',\n",
|
||||
" use_gpu=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, Chainer and its dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `Chainer` constructor's `pip_packages` or `conda_packages` parameters."
|
||||
]
|
||||
},
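As a sketch of that last point, an estimator that also installs extra dependencies might look like the following; the package names are purely illustrative assumptions, not requirements of this tutorial:

```python
# illustrative variant of the estimator above that adds extra packages
from azureml.train.dnn import Chainer

estimator_with_deps = Chainer(source_directory=project_folder,
                              compute_target=compute_target,
                              entry_script='train_mnist.py',
                              node_count=2,
                              process_count_per_node=1,
                              distributed_backend='mpi',
                              use_gpu=True,
                              pip_packages=['tqdm'],            # illustrative
                              conda_packages=['scikit-learn'])  # illustrative
```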
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Submit job\n",
|
||||
"Run your experiment by submitting your estimator object. Note that this call is asynchronous."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run = experiment.submit(estimator)\n",
|
||||
"print(run)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Monitor your run\n",
|
||||
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes. You can see that the widget automatically plots and visualizes the loss metric that we logged to the Azure ML run."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"\n",
|
||||
"RunDetails(run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "minxia"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
},
|
||||
"msauthor": "minxia"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -0,0 +1,125 @@
|
||||
# Official ChainerMN example taken from
|
||||
# https://github.com/chainer/chainer/blob/master/examples/chainermn/mnist/train_mnist.py
|
||||
|
||||
from __future__ import print_function
|
||||
|
||||
import argparse
|
||||
|
||||
import chainer
|
||||
import chainer.functions as F
|
||||
import chainer.links as L
|
||||
from chainer import training
|
||||
from chainer.training import extensions
|
||||
|
||||
import chainermn
|
||||
|
||||
|
||||
class MLP(chainer.Chain):
|
||||
|
||||
def __init__(self, n_units, n_out):
|
||||
super(MLP, self).__init__(
|
||||
# the size of the inputs to each layer will be inferred
|
||||
l1=L.Linear(784, n_units), # n_in -> n_units
|
||||
l2=L.Linear(n_units, n_units), # n_units -> n_units
|
||||
l3=L.Linear(n_units, n_out), # n_units -> n_out
|
||||
)
|
||||
|
||||
def __call__(self, x):
|
||||
h1 = F.relu(self.l1(x))
|
||||
h2 = F.relu(self.l2(h1))
|
||||
return self.l3(h2)
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description='ChainerMN example: MNIST')
|
||||
parser.add_argument('--batchsize', '-b', type=int, default=100,
|
||||
help='Number of images in each mini-batch')
|
||||
parser.add_argument('--communicator', type=str,
|
||||
default='non_cuda_aware', help='Type of communicator')
|
||||
parser.add_argument('--epoch', '-e', type=int, default=20,
|
||||
help='Number of sweeps over the dataset to train')
|
||||
parser.add_argument('--gpu', '-g', default=True,
|
||||
help='Use GPU')
|
||||
parser.add_argument('--out', '-o', default='result',
|
||||
help='Directory to output the result')
|
||||
parser.add_argument('--resume', '-r', default='',
|
||||
help='Resume the training from snapshot')
|
||||
parser.add_argument('--unit', '-u', type=int, default=1000,
|
||||
help='Number of units')
|
||||
args = parser.parse_args()
|
||||
|
||||
# Prepare ChainerMN communicator.
|
||||
|
||||
if args.gpu:
|
||||
if args.communicator == 'naive':
|
||||
print("Error: 'naive' communicator does not support GPU.\n")
|
||||
exit(-1)
|
||||
comm = chainermn.create_communicator(args.communicator)
|
||||
device = comm.intra_rank
|
||||
else:
|
||||
if args.communicator != 'naive':
|
||||
print('Warning: using naive communicator '
|
||||
'because only naive supports CPU-only execution')
|
||||
comm = chainermn.create_communicator('naive')
|
||||
device = -1
|
||||
|
||||
if comm.rank == 0:
|
||||
print('==========================================')
|
||||
print('Num process (COMM_WORLD): {}'.format(comm.size))
|
||||
if args.gpu:
|
||||
print('Using GPUs')
|
||||
print('Using {} communicator'.format(args.communicator))
|
||||
print('Num unit: {}'.format(args.unit))
|
||||
print('Num Minibatch-size: {}'.format(args.batchsize))
|
||||
print('Num epoch: {}'.format(args.epoch))
|
||||
print('==========================================')
|
||||
|
||||
model = L.Classifier(MLP(args.unit, 10))
|
||||
if device >= 0:
|
||||
chainer.cuda.get_device_from_id(device).use()
|
||||
model.to_gpu()
|
||||
|
||||
# Create a multi node optimizer from a standard Chainer optimizer.
|
||||
optimizer = chainermn.create_multi_node_optimizer(
|
||||
chainer.optimizers.Adam(), comm)
|
||||
optimizer.setup(model)
|
||||
|
||||
# Split and distribute the dataset. Only worker 0 loads the whole dataset.
|
||||
# Datasets of worker 0 are evenly split and distributed to all workers.
|
||||
if comm.rank == 0:
|
||||
train, test = chainer.datasets.get_mnist()
|
||||
else:
|
||||
train, test = None, None
|
||||
train = chainermn.scatter_dataset(train, comm, shuffle=True)
|
||||
test = chainermn.scatter_dataset(test, comm, shuffle=True)
|
||||
|
||||
train_iter = chainer.iterators.SerialIterator(train, args.batchsize)
|
||||
test_iter = chainer.iterators.SerialIterator(test, args.batchsize,
|
||||
repeat=False, shuffle=False)
|
||||
|
||||
updater = training.StandardUpdater(train_iter, optimizer, device=device)
|
||||
trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
|
||||
|
||||
# Create a multi node evaluator from a standard Chainer evaluator.
|
||||
evaluator = extensions.Evaluator(test_iter, model, device=device)
|
||||
evaluator = chainermn.create_multi_node_evaluator(evaluator, comm)
|
||||
trainer.extend(evaluator)
|
||||
|
||||
# Some display and output extensions are necessary only for one worker.
|
||||
# (Otherwise, there would just be repeated outputs.)
|
||||
if comm.rank == 0:
|
||||
trainer.extend(extensions.dump_graph('main/loss'))
|
||||
trainer.extend(extensions.LogReport())
|
||||
trainer.extend(extensions.PrintReport(
|
||||
['epoch', 'main/loss', 'validation/main/loss',
|
||||
'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
|
||||
trainer.extend(extensions.ProgressBar())
|
||||
|
||||
if args.resume:
|
||||
chainer.serializers.load_npz(args.resume, trainer)
|
||||
|
||||
trainer.run()
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
@@ -56,7 +56,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!pip install azureml-contrib-tensorboard"
|
||||
"!pip install azureml-tensorboard"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -166,7 +166,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Export Run History to Tensorboard logs\n",
|
||||
"from azureml.contrib.tensorboard.export import export_to_tensorboard\n",
|
||||
"from azureml.tensorboard.export import export_to_tensorboard\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"logdir = 'exportedTBlogs'\n",
|
||||
@@ -208,7 +208,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.contrib.tensorboard import Tensorboard\n",
|
||||
"from azureml.tensorboard import Tensorboard\n",
|
||||
"\n",
|
||||
"# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here\n",
|
||||
"tb = Tensorboard([], local_root=logdir, port=6006)\n",
|
||||
|
||||
@@ -57,7 +57,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!pip install azureml-contrib-tensorboard"
|
||||
"!pip install azureml-tensorboard"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -239,7 +239,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.contrib.tensorboard import Tensorboard\n",
|
||||
"from azureml.tensorboard import Tensorboard\n",
|
||||
"\n",
|
||||
"# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here\n",
|
||||
"tb = Tensorboard([run])\n",
|
||||
@@ -293,7 +293,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import RemoteCompute\n",
|
||||
"from azureml.core.compute import ComputeTarget, RemoteCompute\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"\n",
|
||||
"username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')\n",
|
||||
@@ -305,12 +305,11 @@
|
||||
" attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)\n",
|
||||
" print('found existing:', attached_dsvm_compute.name)\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" attached_dsvm_compute = RemoteCompute.attach(workspace=ws,\n",
|
||||
" name=compute_target_name,\n",
|
||||
" username=username,\n",
|
||||
" address=address,\n",
|
||||
" ssh_port=22,\n",
|
||||
" private_key_file='./.ssh/id_rsa')\n",
|
||||
" config = RemoteCompute.attach_configuration(username=username,\n",
|
||||
" address=address,\n",
|
||||
" ssh_port=22,\n",
|
||||
" private_key_file='./.ssh/id_rsa')\n",
|
||||
" attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, config)\n",
|
||||
" \n",
|
||||
" attached_dsvm_compute.wait_for_completion(show_output=True)"
|
||||
]
|
||||
@@ -407,10 +406,13 @@
|
||||
"# choose a name for your cluster\n",
|
||||
"cluster_name = \"cpucluster\"\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
|
||||
" print('Found existing compute target.')\n",
|
||||
"except ComputeTargetException:\n",
|
||||
"cts = ws.compute_targets\n",
|
||||
"found = False\n",
|
||||
"if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':\n",
|
||||
" found = True\n",
|
||||
" print('Found existing compute target.')\n",
|
||||
" compute_target = cts[cluster_name]\n",
|
||||
"if not found:\n",
|
||||
" print('Creating a new compute target...')\n",
|
||||
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', \n",
|
||||
" max_nodes=4)\n",
|
||||
@@ -418,10 +420,10 @@
|
||||
" # create the cluster\n",
|
||||
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
|
||||
"\n",
|
||||
"compute_target.wait_for_completion(show_output=True, min_node_count=1, timeout_in_minutes=20)\n",
|
||||
"compute_target.wait_for_completion(show_output=True, min_node_count=None)\n",
|
||||
"\n",
|
||||
"# use get_status() to get a detailed status for the current cluster. \n",
|
||||
"print(compute_target.get_status().serialize())"
|
||||
"# print(compute_target.get_status().serialize())"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -0,0 +1,136 @@
|
||||
|
||||
import argparse
|
||||
|
||||
import numpy as np
|
||||
|
||||
import chainer
|
||||
from chainer import backend
|
||||
from chainer import backends
|
||||
from chainer.backends import cuda
|
||||
from chainer import Function, gradient_check, report, training, utils, Variable
|
||||
from chainer import datasets, iterators, optimizers, serializers
|
||||
from chainer import Link, Chain, ChainList
|
||||
import chainer.functions as F
|
||||
import chainer.links as L
|
||||
from chainer.training import extensions
|
||||
from chainer.dataset import concat_examples
|
||||
from chainer.backends.cuda import to_cpu
|
||||
|
||||
from azureml.core.run import Run
|
||||
run = Run.get_context()
|
||||
|
||||
|
||||
class MyNetwork(Chain):
|
||||
|
||||
def __init__(self, n_mid_units=100, n_out=10):
|
||||
super(MyNetwork, self).__init__()
|
||||
with self.init_scope():
|
||||
self.l1 = L.Linear(None, n_mid_units)
|
||||
self.l2 = L.Linear(n_mid_units, n_mid_units)
|
||||
self.l3 = L.Linear(n_mid_units, n_out)
|
||||
|
||||
def forward(self, x):
|
||||
h = F.relu(self.l1(x))
|
||||
h = F.relu(self.l2(h))
|
||||
return self.l3(h)
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description='Chainer example: MNIST')
|
||||
parser.add_argument('--batchsize', '-b', type=int, default=100,
|
||||
help='Number of images in each mini-batch')
|
||||
parser.add_argument('--epochs', '-e', type=int, default=20,
|
||||
help='Number of sweeps over the dataset to train')
|
||||
parser.add_argument('--output_dir', '-o', default='./outputs',
|
||||
help='Directory to output the result')
|
||||
parser.add_argument('--gpu_id', '-g', type=int, default=0,
|
||||
help='ID of the GPU to be used. Set to -1 if you use CPU')
|
||||
args = parser.parse_args()
|
||||
|
||||
# Download the MNIST data if you haven't downloaded it yet
|
||||
train, test = datasets.mnist.get_mnist(withlabel=True, ndim=1)
|
||||
|
||||
gpu_id = args.gpu_id
|
||||
batchsize = args.batchsize
|
||||
epochs = args.epochs
|
||||
run.log('Batch size', np.int(batchsize))
|
||||
run.log('Epochs', np.int(epochs))
|
||||
|
||||
train_iter = iterators.SerialIterator(train, batchsize)
|
||||
test_iter = iterators.SerialIterator(test, batchsize,
|
||||
repeat=False, shuffle=False)
|
||||
|
||||
model = MyNetwork()
|
||||
|
||||
if gpu_id >= 0:
|
||||
# Make a specified GPU current
|
||||
chainer.backends.cuda.get_device_from_id(gpu_id).use()
|
||||
model.to_gpu() # Copy the model to the GPU
|
||||
|
||||
# Choose an optimizer algorithm
|
||||
optimizer = optimizers.MomentumSGD(lr=0.01, momentum=0.9)
|
||||
|
||||
# Give the optimizer a reference to the model so that it
|
||||
# can locate the model's parameters.
|
||||
optimizer.setup(model)
|
||||
|
||||
while train_iter.epoch < epochs:
|
||||
# ---------- One iteration of the training loop ----------
|
||||
train_batch = train_iter.next()
|
||||
image_train, target_train = concat_examples(train_batch, gpu_id)
|
||||
|
||||
# Calculate the prediction of the network
|
||||
prediction_train = model(image_train)
|
||||
|
||||
# Calculate the loss with softmax_cross_entropy
|
||||
loss = F.softmax_cross_entropy(prediction_train, target_train)
|
||||
|
||||
# Calculate the gradients in the network
|
||||
model.cleargrads()
|
||||
loss.backward()
|
||||
|
||||
# Update all the trainable parameters
|
||||
optimizer.update()
|
||||
# --------------------- until here ---------------------
|
||||
|
||||
# Check the validation accuracy of prediction after every epoch
|
||||
if train_iter.is_new_epoch: # If this iteration is the final iteration of the current epoch
|
||||
|
||||
# Display the training loss
|
||||
print('epoch:{:02d} train_loss:{:.04f} '.format(
|
||||
train_iter.epoch, float(to_cpu(loss.array))), end='')
|
||||
|
||||
test_losses = []
|
||||
test_accuracies = []
|
||||
while True:
|
||||
test_batch = test_iter.next()
|
||||
image_test, target_test = concat_examples(test_batch, gpu_id)
|
||||
|
||||
# Forward the test data
|
||||
prediction_test = model(image_test)
|
||||
|
||||
# Calculate the loss
|
||||
loss_test = F.softmax_cross_entropy(prediction_test, target_test)
|
||||
test_losses.append(to_cpu(loss_test.array))
|
||||
|
||||
# Calculate the accuracy
|
||||
accuracy = F.accuracy(prediction_test, target_test)
|
||||
accuracy.to_cpu()
|
||||
test_accuracies.append(accuracy.array)
|
||||
|
||||
if test_iter.is_new_epoch:
|
||||
test_iter.epoch = 0
|
||||
test_iter.current_position = 0
|
||||
test_iter.is_new_epoch = False
|
||||
test_iter._pushed_position = None
|
||||
break
|
||||
|
||||
val_accuracy = np.mean(test_accuracies)
|
||||
print('val_loss:{:.04f} val_accuracy:{:.04f}'.format(
|
||||
np.mean(test_losses), val_accuracy))
|
||||
|
||||
run.log("Accuracy", np.float(val_accuracy))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
@@ -0,0 +1,134 @@
|
||||
|
||||
import argparse
|
||||
|
||||
import numpy as np
|
||||
|
||||
import chainer
|
||||
from chainer import backend
|
||||
from chainer import backends
|
||||
from chainer.backends import cuda
|
||||
from chainer import Function, gradient_check, report, training, utils, Variable
|
||||
from chainer import datasets, iterators, optimizers, serializers
|
||||
from chainer import Link, Chain, ChainList
|
||||
import chainer.functions as F
|
||||
import chainer.links as L
|
||||
from chainer.training import extensions
|
||||
from chainer.dataset import concat_examples
|
||||
from chainer.backends.cuda import to_cpu
|
||||
|
||||
from azureml.core.run import Run
|
||||
run = Run.get_context()
|
||||
|
||||
|
||||
class MyNetwork(Chain):
|
||||
|
||||
def __init__(self, n_mid_units=100, n_out=10):
|
||||
super(MyNetwork, self).__init__()
|
||||
with self.init_scope():
|
||||
self.l1 = L.Linear(None, n_mid_units)
|
||||
self.l2 = L.Linear(n_mid_units, n_mid_units)
|
||||
self.l3 = L.Linear(n_mid_units, n_out)
|
||||
|
||||
def forward(self, x):
|
||||
h = F.relu(self.l1(x))
|
||||
h = F.relu(self.l2(h))
|
||||
return self.l3(h)
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description='Chainer example: MNIST')
|
||||
parser.add_argument('--batchsize', '-b', type=int, default=100,
|
||||
help='Number of images in each mini-batch')
|
||||
parser.add_argument('--epochs', '-e', type=int, default=20,
|
||||
help='Number of sweeps over the dataset to train')
|
||||
parser.add_argument('--output_dir', '-o', default='./outputs',
|
||||
help='Directory to output the result')
|
||||
args = parser.parse_args()
|
||||
|
||||
# Download the MNIST data if you haven't downloaded it yet
|
||||
train, test = datasets.mnist.get_mnist(withlabel=True, ndim=1)
|
||||
|
||||
batchsize = args.batchsize
|
||||
epochs = args.epochs
|
||||
run.log('Batch size', np.int(batchsize))
|
||||
run.log('Epochs', np.int(epochs))
|
||||
|
||||
train_iter = iterators.SerialIterator(train, batchsize)
|
||||
test_iter = iterators.SerialIterator(test, batchsize,
|
||||
repeat=False, shuffle=False)
|
||||
|
||||
model = MyNetwork()
|
||||
|
||||
gpu_id = -1 # Set to -1 if you use CPU
|
||||
if gpu_id >= 0:
|
||||
# Make a specified GPU current
|
||||
chainer.backends.cuda.get_device_from_id(0).use()
|
||||
model.to_gpu() # Copy the model to the GPU
|
||||
|
||||
# Choose an optimizer algorithm
|
||||
optimizer = optimizers.MomentumSGD(lr=0.01, momentum=0.9)
|
||||
|
||||
# Give the optimizer a reference to the model so that it
|
||||
# can locate the model's parameters.
|
||||
optimizer.setup(model)
|
||||
|
||||
while train_iter.epoch < epochs:
|
||||
# ---------- One iteration of the training loop ----------
|
||||
train_batch = train_iter.next()
|
||||
image_train, target_train = concat_examples(train_batch, gpu_id)
|
||||
|
||||
# Calculate the prediction of the network
|
||||
prediction_train = model(image_train)
|
||||
|
||||
# Calculate the loss with softmax_cross_entropy
|
||||
loss = F.softmax_cross_entropy(prediction_train, target_train)
|
||||
|
||||
# Calculate the gradients in the network
|
||||
model.cleargrads()
|
||||
loss.backward()
|
||||
|
||||
# Update all the trainable parameters
|
||||
optimizer.update()
|
||||
# --------------------- until here ---------------------
|
||||
|
||||
# Check the validation accuracy of prediction after every epoch
|
||||
if train_iter.is_new_epoch: # If this iteration is the final iteration of the current epoch
|
||||
|
||||
# Display the training loss
|
||||
print('epoch:{:02d} train_loss:{:.04f} '.format(
|
||||
train_iter.epoch, float(to_cpu(loss.array))), end='')
|
||||
|
||||
test_losses = []
|
||||
test_accuracies = []
|
||||
while True:
|
||||
test_batch = test_iter.next()
|
||||
image_test, target_test = concat_examples(test_batch, gpu_id)
|
||||
|
||||
# Forward the test data
|
||||
prediction_test = model(image_test)
|
||||
|
||||
# Calculate the loss
|
||||
loss_test = F.softmax_cross_entropy(prediction_test, target_test)
|
||||
test_losses.append(to_cpu(loss_test.array))
|
||||
|
||||
# Calculate the accuracy
|
||||
accuracy = F.accuracy(prediction_test, target_test)
|
||||
accuracy.to_cpu()
|
||||
test_accuracies.append(accuracy.array)
|
||||
|
||||
if test_iter.is_new_epoch:
|
||||
test_iter.epoch = 0
|
||||
test_iter.current_position = 0
|
||||
test_iter.is_new_epoch = False
|
||||
test_iter._pushed_position = None
|
||||
break
|
||||
|
||||
val_accuracy = np.mean(test_accuracies)
|
||||
print('val_loss:{:.04f} val_accuracy:{:.04f}'.format(
|
||||
np.mean(test_losses), val_accuracy))
|
||||
|
||||
run.log("Accuracy", np.float(val_accuracy))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
@@ -0,0 +1,425 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Train and hyperparameter tune with Chainer\n",
|
||||
"\n",
|
||||
"In this tutorial, we demonstrate how to use the Azure ML Python SDK to train a Convolutional Neural Network (CNN) on a single-node GPU with Chainer to perform handwritten digit recognition on the popular MNIST dataset. We will also demonstrate how to perform hyperparameter tuning of the model using Azure ML's HyperDrive service."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Diagnostics\n",
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"Diagnostics"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"\n",
|
||||
"set_diagnostics_collection(send_diagnostics=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize workspace\n",
|
||||
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create or Attach existing AmlCompute\n",
|
||||
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource.\n",
|
||||
"\n",
|
||||
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process.\n",
|
||||
"\n",
|
||||
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"\n",
|
||||
"# choose a name for your cluster\n",
|
||||
"cluster_name = \"gpucluster\"\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
|
||||
" print('Found existing compute target.')\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print('Creating a new compute target...')\n",
|
||||
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
|
||||
" max_nodes=4)\n",
|
||||
"\n",
|
||||
" # create the cluster\n",
|
||||
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
|
||||
"\n",
|
||||
" compute_target.wait_for_completion(show_output=True)\n",
|
||||
"\n",
|
||||
"# use get_status() to get a detailed status for the current cluster. \n",
|
||||
"print(compute_target.get_status().serialize())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Train model on the remote compute\n",
|
||||
"Now that you have your data and training script prepared, you are ready to train on your remote compute cluster. You can take advantage of Azure compute to leverage GPUs to cut down your training time. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a project directory\n",
|
||||
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"project_folder = './chainer-mnist'\n",
|
||||
"os.makedirs(project_folder, exist_ok=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Prepare training script\n",
|
||||
"Now you will need to create your training script. In this tutorial, the training script is already provided for you at `chainer_mnist.py`. In practice, you should be able to take any custom training script as is and run it with Azure ML without having to modify your code.\n",
|
||||
"\n",
|
||||
"However, if you would like to use Azure ML's [tracking and metrics](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#metrics) capabilities, you will have to add a small amount of Azure ML code inside your training script. \n",
|
||||
"\n",
|
||||
"In `chainer_mnist.py`, we will log some metrics to our Azure ML run. To do so, we will access the Azure ML `Run` object within the script:\n",
|
||||
"```Python\n",
|
||||
"from azureml.core.run import Run\n",
|
||||
"run = Run.get_context()\n",
|
||||
"```\n",
|
||||
"Further within `chainer_mnist.py`, we log the batchsize and epochs parameters, and the highest accuracy the model achieves:\n",
|
||||
"```Python\n",
|
||||
"run.log('Batch size', np.int(args.batchsize))\n",
|
||||
"run.log('Epochs', np.int(args.epochs))\n",
|
||||
"\n",
|
||||
"run.log('Accuracy', np.float(val_accuracy))\n",
|
||||
"```\n",
|
||||
"These run metrics will become particularly important when we begin hyperparameter tuning our model in the \"Tune model hyperparameters\" section."
|
||||
]
|
||||
},
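Once the run submitted later in this notebook has completed, the values logged this way can be read back from the run object, for example:

```python
# sketch: retrieve the logged metrics after the run finishes
metrics = run.get_metrics()
print('Accuracy values logged by the run:', metrics.get('Accuracy'))
```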
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Once your script is ready, copy the training script `chainer_mnist.py` into your project directory."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import shutil\n",
|
||||
"\n",
|
||||
"shutil.copy('chainer_mnist.py', project_folder)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create an experiment\n",
|
||||
"Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this Chainer tutorial. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Experiment\n",
|
||||
"\n",
|
||||
"experiment_name = 'chainer-mnist'\n",
|
||||
"experiment = Experiment(ws, name=experiment_name)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a Chainer estimator\n",
|
||||
"The Azure ML SDK's Chainer estimator enables you to easily submit Chainer training jobs for both single-node and distributed runs. The following code will define a single-node Chainer job."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.train.dnn import Chainer\n",
|
||||
"\n",
|
||||
"script_params = {\n",
|
||||
" '--epochs': 10,\n",
|
||||
" '--batchsize': 128,\n",
|
||||
" '--output_dir': './outputs'\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"estimator = Chainer(source_directory=project_folder, \n",
|
||||
" script_params=script_params,\n",
|
||||
" compute_target=compute_target,\n",
|
||||
" pip_packages=['numpy', 'pytest'],\n",
|
||||
" entry_script='chainer_mnist.py',\n",
|
||||
" use_gpu=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The `script_params` parameter is a dictionary containing the command-line arguments to your training script `entry_script`. To leverage the Azure VM's GPU for training, we set `use_gpu=True`."
|
||||
]
|
||||
},
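For intuition, the dictionary above is turned into ordinary command-line arguments for the entry script, so the remote run is roughly equivalent to the following local invocation (a sketch, not what the service literally executes):

```python
# rough local equivalent of the submitted command (illustrative only)
import subprocess

subprocess.run(['python', 'chainer_mnist.py',
                '--epochs', '10',
                '--batchsize', '128',
                '--output_dir', './outputs'],
               check=True)
```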
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Submit job\n",
|
||||
"Run your experiment by submitting your estimator object. Note that this call is asynchronous."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run = experiment.submit(estimator)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Monitor your run\n",
|
||||
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"\n",
|
||||
"RunDetails(run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# to get more details of your run\n",
|
||||
"print(run.get_details())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Tune model hyperparameters\n",
|
||||
"Now that we've seen how to do a simple Chainer training run using the SDK, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Start a hyperparameter sweep\n",
|
||||
"First, we will define the hyperparameter space to sweep over. Let's tune the batch size and epochs parameters. In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, accuracy.\n",
|
||||
"\n",
|
||||
"Then, we specify the early termination policy to use to early terminate poorly performing runs. Here we use the `BanditPolicy`, which will terminate any run that doesn't fall within the slack factor of our primary evaluation metric. In this tutorial, we will apply this policy every epoch (since we report our `Accuracy` metric every epoch and `evaluation_interval=1`). Notice we will delay the first policy evaluation until after the first `3` epochs (`delay_evaluation=3`).\n",
|
||||
"Refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-tune-hyperparameters#specify-an-early-termination-policy) for more information on the BanditPolicy and other policies available."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.train.hyperdrive.runconfig import HyperDriveRunConfig\n",
|
||||
"from azureml.train.hyperdrive.sampling import RandomParameterSampling\n",
|
||||
"from azureml.train.hyperdrive.policy import BanditPolicy\n",
|
||||
"from azureml.train.hyperdrive.run import PrimaryMetricGoal\n",
|
||||
"from azureml.train.hyperdrive.parameter_expressions import choice\n",
|
||||
" \n",
|
||||
"\n",
|
||||
"param_sampling = RandomParameterSampling( {\n",
|
||||
" \"--batchsize\": choice(128, 256),\n",
|
||||
" \"--epochs\": choice(5, 10, 20, 40)\n",
|
||||
" }\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"hyperdrive_run_config = HyperDriveRunConfig(estimator=estimator,\n",
|
||||
" hyperparameter_sampling=param_sampling, \n",
|
||||
" primary_metric_name='Accuracy',\n",
|
||||
" primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,\n",
|
||||
" max_total_runs=8,\n",
|
||||
" max_concurrent_runs=4)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, lauch the hyperparameter tuning job."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# start the HyperDrive run\n",
|
||||
"hyperdrive_run = experiment.submit(hyperdrive_run_config)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Monitor HyperDrive runs\n",
|
||||
"You can monitor the progress of the runs with the following Jupyter widget. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"RunDetails(hyperdrive_run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "minxia"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
},
|
||||
"msauthor": "minxia"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -0,0 +1,123 @@
|
||||
# Copyright (c) Microsoft Corporation. All rights reserved.
|
||||
# Licensed under the MIT License.
|
||||
|
||||
import numpy as np
|
||||
import argparse
|
||||
import os
|
||||
|
||||
import matplotlib.pyplot as plt
|
||||
|
||||
import keras
|
||||
from keras.models import Sequential, model_from_json
|
||||
from keras.layers import Dense
|
||||
from keras.optimizers import RMSprop
|
||||
from keras.callbacks import Callback
|
||||
|
||||
import tensorflow as tf
|
||||
|
||||
from azureml.core import Run
|
||||
from utils import load_data, one_hot_encode
|
||||
|
||||
print("Keras version:", keras.__version__)
|
||||
print("Tensorflow version:", tf.__version__)
|
||||
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder mounting point')
|
||||
parser.add_argument('--batch-size', type=int, dest='batch_size', default=50, help='mini batch size for training')
|
||||
parser.add_argument('--first-layer-neurons', type=int, dest='n_hidden_1', default=100,
|
||||
help='# of neurons in the first layer')
|
||||
parser.add_argument('--second-layer-neurons', type=int, dest='n_hidden_2', default=100,
|
||||
help='# of neurons in the second layer')
|
||||
parser.add_argument('--learning-rate', type=float, dest='learning_rate', default=0.001, help='learning rate')
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
data_folder = args.data_folder
|
||||
|
||||
print('training dataset is stored here:', data_folder)
|
||||
|
||||
X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0
|
||||
X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0
|
||||
|
||||
y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)
|
||||
y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)
|
||||
|
||||
training_set_size = X_train.shape[0]
|
||||
|
||||
n_inputs = 28 * 28
|
||||
n_h1 = args.n_hidden_1
|
||||
n_h2 = args.n_hidden_2
|
||||
n_outputs = 10
|
||||
n_epochs = 20
|
||||
batch_size = args.batch_size
|
||||
learning_rate = args.learning_rate
|
||||
|
||||
y_train = one_hot_encode(y_train, n_outputs)
|
||||
y_test = one_hot_encode(y_test, n_outputs)
|
||||
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape, sep='\n')
|
||||
|
||||
# Build a simple MLP model
|
||||
model = Sequential()
|
||||
# first hidden layer
|
||||
model.add(Dense(n_h1, activation='relu', input_shape=(n_inputs,)))
|
||||
# second hidden layer
|
||||
model.add(Dense(n_h2, activation='relu'))
|
||||
# output layer
|
||||
model.add(Dense(n_outputs, activation='softmax'))
|
||||
|
||||
model.summary()
|
||||
|
||||
model.compile(loss='categorical_crossentropy',
|
||||
optimizer=RMSprop(lr=learning_rate),
|
||||
metrics=['accuracy'])
|
||||
|
||||
# start an Azure ML run
|
||||
run = Run.get_context()
|
||||
|
||||
|
||||
class LogRunMetrics(Callback):
|
||||
# callback at the end of every epoch
|
||||
def on_epoch_end(self, epoch, log):
|
||||
# logging the same metric name at the end of every epoch creates a list in run history
|
||||
run.log('Loss', log['loss'])
|
||||
run.log('Accuracy', log['acc'])
|
||||
|
||||
|
||||
history = model.fit(X_train, y_train,
|
||||
batch_size=batch_size,
|
||||
epochs=n_epochs,
|
||||
verbose=2,
|
||||
validation_data=(X_test, y_test),
|
||||
callbacks=[LogRunMetrics()])
|
||||
|
||||
score = model.evaluate(X_test, y_test, verbose=0)
|
||||
|
||||
# log a single value
|
||||
run.log("Final test loss", score[0])
|
||||
print('Test loss:', score[0])
|
||||
|
||||
run.log('Final test accuracy', score[1])
|
||||
print('Test accuracy:', score[1])
|
||||
|
||||
plt.figure(figsize=(6, 3))
|
||||
plt.title('MNIST with Keras MLP ({} epochs)'.format(n_epochs), fontsize=14)
|
||||
plt.plot(history.history['acc'], 'b-', label='Accuracy', lw=4, alpha=0.5)
|
||||
plt.plot(history.history['loss'], 'r--', label='Loss', lw=4, alpha=0.5)
|
||||
plt.legend(fontsize=12)
|
||||
plt.grid(True)
|
||||
|
||||
# log an image
|
||||
run.log_image('Accuracy vs Loss', plot=plt)
|
||||
|
||||
# create a ./outputs/model folder in the compute target
|
||||
# files saved in the "./outputs" folder are automatically uploaded into run history
|
||||
os.makedirs('./outputs/model', exist_ok=True)
|
||||
|
||||
# serialize NN architecture to JSON
|
||||
model_json = model.to_json()
|
||||
# save model JSON
|
||||
with open('./outputs/model/model.json', 'w') as f:
|
||||
f.write(model_json)
|
||||
# save model weights
|
||||
model.save_weights('./outputs/model/model.h5')
|
||||
print("model saved in ./outputs/model folder")
|
||||
@@ -0,0 +1,27 @@
|
||||
# Copyright (c) Microsoft Corporation. All rights reserved.
|
||||
# Licensed under the MIT License.
|
||||
|
||||
import gzip
|
||||
import numpy as np
|
||||
import struct
|
||||
|
||||
|
||||
# load compressed MNIST gz files and return numpy arrays
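# (the .gz files use the MNIST IDX layout: a 4-byte magic number, big-endian
# item counts, then row/column counts for images, followed by raw uint8 data)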
|
||||
def load_data(filename, label=False):
|
||||
with gzip.open(filename) as gz:
|
||||
struct.unpack('I', gz.read(4))
|
||||
n_items = struct.unpack('>I', gz.read(4))
|
||||
if not label:
|
||||
n_rows = struct.unpack('>I', gz.read(4))[0]
|
||||
n_cols = struct.unpack('>I', gz.read(4))[0]
|
||||
res = np.frombuffer(gz.read(n_items[0] * n_rows * n_cols), dtype=np.uint8)
|
||||
res = res.reshape(n_items[0], n_rows * n_cols)
|
||||
else:
|
||||
res = np.frombuffer(gz.read(n_items[0]), dtype=np.uint8)
|
||||
res = res.reshape(n_items[0], 1)
|
||||
return res
|
||||
|
||||
|
||||
# one-hot encode a 1-D array
|
||||
def one_hot_encode(array, num_of_classes):
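# e.g. one_hot_encode(np.array([0, 2]), 3) -> [[1., 0., 0.], [0., 0., 1.]]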
|
||||
return np.eye(num_of_classes)[array.reshape(-1)]
|
||||
@@ -217,7 +217,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"props = run.upload_file(name='myfile_in_the_cloud.txt', path_or_stream='./myfile.txt')\n",
|
||||
"props = run.upload_file(name='outputs/myfile_in_the_cloud.txt', path_or_stream='./myfile.txt')\n",
|
||||
"props.serialize()"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -227,7 +227,7 @@
|
||||
" private_key_file='./.ssh/id_rsa')\n",
|
||||
" attached_dsvm_compute = ComputeTarget.attach(workspace=ws,\n",
|
||||
" name=compute_target_name,\n",
|
||||
" attach_config=attach_config)\n",
|
||||
" attach_configuration=attach_config)\n",
|
||||
" attached_dsvm_compute.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -81,7 +81,7 @@
|
||||
"from azureml.core import Experiment, Workspace\n",
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
"print(\"This notebook was created using version 1.0.15 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.0.2 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")\n",
|
||||
"print(\"\")\n",
|
||||
"\n",
|
||||
@@ -138,7 +138,6 @@
|
||||
"* We use `start_logging` to create a new run in this experiment\n",
|
||||
"* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.\n",
|
||||
"* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.\n",
|
||||
"* We use `run.take_snapshot()` to capture *this* notebook so we can reproduce this experiment at a later time.\n",
|
||||
"* We use `run.complete()` to indicate that the run is over and results can be captured and finalized"
|
||||
]
|
||||
},
|
||||
@@ -173,9 +172,6 @@
|
||||
"# Save the model to the outputs directory for capture\n",
|
||||
"joblib.dump(value=regression_model, filename='outputs/model.pkl')\n",
|
||||
"\n",
|
||||
"# Take a snapshot of the directory containing this notebook\n",
|
||||
"run.take_snapshot('./')\n",
|
||||
"\n",
|
||||
"# Complete the run\n",
|
||||
"run.complete()"
|
||||
]
|
||||
@@ -238,10 +234,7 @@
|
||||
" run.log(name=\"mse\", value=mse)\n",
|
||||
"\n",
|
||||
" # Save the model to the outputs directory for capture\n",
|
||||
" joblib.dump(value=regression_model, filename='outputs/model.pkl')\n",
|
||||
" \n",
|
||||
" # Capture this notebook with the run\n",
|
||||
" run.take_snapshot('./')\n"
|
||||
" joblib.dump(value=regression_model, filename='outputs/model.pkl')\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
1 pr.md
@@ -34,6 +34,7 @@
|
||||
- [Microsoft introduces Azure service to automatically build AI models](https://venturebeat.com/2018/09/24/microsoft-introduces-azure-service-to-automatically-build-ai-models/) (VentureBeat)
|
||||
|
||||
## Community Projects
|
||||
- [Use Papermill with Azure ML](https://github.com/jreynolds01/papermill_execution_azureml/)
|
||||
- [Fashion MNIST](https://github.com/amynic/azureml-sdk-fashion)
|
||||
- Keras on Databricks
|
||||
- [Samples from CSS](https://github.com/Azure/AMLSamples)
|
||||
|
||||
@@ -15,7 +15,7 @@
|
||||
"source": [
|
||||
"# Tutorial #1: Train an image classification model with Azure Machine Learning\n",
|
||||
"\n",
|
||||
"In this tutorial, you train a machine learning model both locally and on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning service (preview) in a Python Jupyter notebook. You can then use the notebook as a template to train your own machine learning model with your own data. This tutorial is **part one of a two-part tutorial series**. \n",
|
||||
"In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning service (preview) in a Python Jupyter notebook. You can then use the notebook as a template to train your own machine learning model with your own data. This tutorial is **part one of a two-part tutorial series**. \n",
|
||||
"\n",
|
||||
"This tutorial trains a simple logistic regression using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](http://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing a number from 0 to 9. The goal is to create a multi-class classifier to identify the digit a given image represents. \n",
|
||||
"\n",
|
||||
@@ -31,9 +31,7 @@
|
||||
"\n",
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"Use [these instructions](https://aka.ms/aml-how-to-configure-environment) to: \n",
|
||||
"* Create a workspace and its configuration file (**config.json**) \n",
|
||||
"* Save your **config.json** to the same folder as this notebook"
|
||||
"See prerequisites in the [Azure Machine Learning documentation](https://docs.microsoft.com/azure/machine-learning/service/tutorial-train-models-with-aml#prerequisites)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -96,7 +94,7 @@
|
||||
"source": [
|
||||
"# load workspace configuration from the config.json file in the current folder.\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.location, ws.resource_group, ws.location, sep = '\\t')"
|
||||
"print(ws.name, ws.location, ws.resource_group, ws.location, sep='\\t')"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -128,10 +126,10 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create or Attach existing AmlCompute\n",
|
||||
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
|
||||
"### Create or Attach existing compute resource\n",
|
||||
"By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.\n",
|
||||
"\n",
|
||||
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process."
|
||||
"**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -206,12 +204,13 @@
|
||||
"source": [
|
||||
"import urllib.request\n",
|
||||
"\n",
|
||||
"os.makedirs('./data', exist_ok = True)\n",
|
||||
"data_folder = os.path.join(os.getcwd(), 'data')\n",
|
||||
"os.makedirs(data_folder, exist_ok=True)\n",
|
||||
"\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename='./data/train-images.gz')\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename='./data/train-labels.gz')\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename='./data/test-images.gz')\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename='./data/test-labels.gz')"
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'train-images.gz'))\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'train-labels.gz'))\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'test-images.gz'))\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'test-labels.gz'))"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -233,11 +232,10 @@
|
||||
"from utils import load_data\n",
|
||||
"\n",
|
||||
"# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.\n",
|
||||
"X_train = load_data('./data/train-images.gz', False) / 255.0\n",
|
||||
"y_train = load_data('./data/train-labels.gz', True).reshape(-1)\n",
|
||||
"\n",
|
||||
"X_test = load_data('./data/test-images.gz', False) / 255.0\n",
|
||||
"y_test = load_data('./data/test-labels.gz', True).reshape(-1)\n",
|
||||
"X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0\n",
|
||||
"X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0\n",
|
||||
"y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)\n",
|
||||
"y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)\n",
|
||||
"\n",
|
||||
"# now let's show some randomly chosen images from the traininng set.\n",
|
||||
"count = 0\n",
|
||||
@@ -279,62 +277,15 @@
|
||||
"ds = ws.get_default_datastore()\n",
|
||||
"print(ds.datastore_type, ds.account_name, ds.container_name)\n",
|
||||
"\n",
|
||||
"ds.upload(src_dir='./data', target_path='mnist', overwrite=True, show_progress=True)"
|
||||
"ds.upload(src_dir=data_folder, target_path='mnist', overwrite=True, show_progress=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You now have everything you need to start training a model. \n",
|
||||
"\n",
|
||||
"## Train a local model\n",
|
||||
"\n",
|
||||
"Train a simple logistic regression model using scikit-learn locally.\n",
|
||||
"\n",
|
||||
"**Training locally can take a minute or two** depending on your computer configuration."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"from sklearn.linear_model import LogisticRegression\n",
|
||||
"\n",
|
||||
"clf = LogisticRegression()\n",
|
||||
"clf.fit(X_train, y_train)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, make predictions using the test set and calculate the accuracy."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"y_hat = clf.predict(X_test)\n",
|
||||
"print(np.average(y_hat == y_test))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"With just a few lines of code, you have a 92% accuracy.\n",
|
||||
"\n",
|
||||
"## Train on a remote cluster\n",
|
||||
"\n",
|
||||
"Now you can expand on this simple model by building a model with a different regularization rate. This time you'll train the model on a remote resource. \n",
|
||||
"\n",
|
||||
"For this task, submit the job to the remote training cluster you set up earlier. To submit a job you:\n",
|
||||
"* Create a directory\n",
|
||||
"* Create a training script\n",
|
||||
@@ -352,7 +303,8 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"script_folder = './sklearn-mnist'\n",
|
||||
"import os\n",
|
||||
"script_folder = os.path.join(os.getcwd(), \"sklearn-mnist\")\n",
|
||||
"os.makedirs(script_folder, exist_ok=True)"
|
||||
]
|
||||
},
|
||||
@@ -362,7 +314,7 @@
|
||||
"source": [
|
||||
"### Create a training script\n",
|
||||
"\n",
|
||||
"To submit the job to the cluster, first create a training script. Run the following code to create the training script called `train.py` in the directory you just created. This training adds a regularization rate to the training algorithm, so produces a slightly different model than the local version."
|
||||
"To submit the job to the cluster, first create a training script. Run the following code to create the training script called `train.py` in the directory you just created. "
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -389,7 +341,7 @@
|
||||
"parser.add_argument('--regularization', type=float, dest='reg', default=0.01, help='regularization rate')\n",
|
||||
"args = parser.parse_args()\n",
|
||||
"\n",
|
||||
"data_folder = os.path.join(args.data_folder, 'mnist')\n",
|
||||
"data_folder = args.data_folder\n",
|
||||
"print('Data folder:', data_folder)\n",
|
||||
"\n",
|
||||
"# load train and test set into numpy arrays\n",
|
||||
@@ -474,7 +426,7 @@
|
||||
"* Parameters required from the training script \n",
|
||||
"* Python packages needed for training\n",
|
||||
"\n",
|
||||
"In this tutorial, this target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for execution. The data_folder is set to use the datastore (`ds.as_mount()`)."
|
||||
"In this tutorial, this target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for execution. The data_folder is set to use the datastore (`ds.path('mnist').as_mount()`)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -490,8 +442,8 @@
|
||||
"from azureml.train.estimator import Estimator\n",
|
||||
"\n",
|
||||
"script_params = {\n",
|
||||
" '--data-folder': ds.as_mount(),\n",
|
||||
" '--regularization': 0.8\n",
|
||||
" '--data-folder': ds.path('mnist').as_mount(),\n",
|
||||
" '--regularization': 0.05\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"est = Estimator(source_directory=script_folder,\n",
|
||||
@@ -501,13 +453,29 @@
|
||||
" conda_packages=['scikit-learn'])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"This is what the mounting point looks like:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(ds.path('mnist').as_mount())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Submit the job to the cluster\n",
|
||||
"\n",
|
||||
"Run the experiment by submitting the estimator object."
|
||||
"Run the experiment by submitting the estimator object. And you can navigate to Azure portal to monitor the run."
|
||||
]
|
||||
},
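The submission cell itself is trimmed from this hunk. A minimal sketch, assuming `ws` and the estimator `est` defined above (the experiment name is illustrative):

```python
from azureml.core import Experiment

exp = Experiment(workspace=ws, name='sklearn-mnist')  # illustrative experiment name
run = exp.submit(config=est)                          # queue the estimator for execution on the cluster
run                                                   # in a notebook, shows the run id and a portal link
```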
{
|
||||
@@ -534,17 +502,17 @@
|
||||
"\n",
|
||||
"## Monitor a remote run\n",
|
||||
"\n",
|
||||
"In total, the first run takes **approximately 10 minutes**. But for subsequent runs, as long as the script dependencies don't change, the same image is reused and hence the container start up time is much faster.\n",
|
||||
"In total, the first run takes **approximately 10 minutes**. But for subsequent runs, as long as the dependencies (`conda_packages` parameter in the above estimator constructor) don't change, the same image is reused and hence the container start up time is much faster.\n",
|
||||
"\n",
|
||||
"Here is what's happening while you wait:\n",
|
||||
"\n",
|
||||
"- **Image creation**: A Docker image is created matching the Python environment specified by the estimator. The image is uploaded to the workspace. Image creation and uploading takes **about 5 minutes**. \n",
|
||||
"- **Image creation**: A Docker image is created matching the Python environment specified by the estimator. The image is built and stored in the ACR (Azure Container Registry) associated with your workspace. Image creation and uploading takes **about 5 minutes**. \n",
|
||||
"\n",
|
||||
" This stage happens once for each Python environment since the container is cached for subsequent runs. During image creation, logs are streamed to the run history. You can monitor the image creation progress using these logs.\n",
|
||||
"\n",
|
||||
"- **Scaling**: If the remote cluster requires more nodes to execute the run than currently available, additional nodes are added automatically. Scaling typically takes **about 5 minutes.**\n",
|
||||
"\n",
|
||||
"- **Running**: In this stage, the necessary scripts and files are sent to the compute target, then data stores are mounted/copied, then the entry_script is run. While the job is running, stdout and the ./logs directory are streamed to the run history. You can monitor the run's progress using these logs.\n",
|
||||
"- **Running**: In this stage, the necessary scripts and files are sent to the compute target, then data stores are mounted/copied, then the entry_script is run. While the job is running, stdout and the files in the ./logs directory are streamed to the run history. You can monitor the run's progress using these logs.\n",
|
||||
"\n",
|
||||
"- **Post-Processing**: The ./outputs directory of the run is copied over to the run history in your workspace so you can access these results.\n",
|
||||
"\n",
@@ -574,7 +542,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run)."
|
||||
"By the way, if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -583,7 +551,7 @@
|
||||
"source": [
|
||||
"### Get log results upon completion\n",
|
||||
"\n",
|
||||
"Model training and monitoring happen in the background. Wait until the model has completed training before running more code. Use `wait_for_completion` to show when the model training is complete."
|
||||
"Model training happens in the background. You can use `wait_for_completion` to block and wait until the model has completed training before running more code. "
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -598,7 +566,8 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run.wait_for_completion(show_output=False) # specify True for a verbose log"
|
||||
"# specify show_output to True for a verbose log\n",
|
||||
"run.wait_for_completion(show_output=False) "
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -607,7 +576,7 @@
|
||||
"source": [
|
||||
"### Display run results\n",
|
||||
"\n",
|
||||
"You now have a model trained on a remote cluster. Retrieve the accuracy of the model:"
|
||||
"You now have a model trained on a remote cluster. Retrieve all the metrics logged during the run, including the accuracy of the model:"
|
||||
]
|
||||
},
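The retrieval cell is elided from this hunk; a minimal sketch, assuming the run has finished (the metric names depend on what `train.py` logged):

```python
metrics = run.get_metrics()   # dictionary of everything recorded with run.log()
print(metrics)
```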
{
|
||||
@@ -668,7 +637,7 @@
|
||||
"source": [
|
||||
"# register model \n",
|
||||
"model = run.register_model(model_name='sklearn_mnist', model_path='outputs/sklearn_mnist_model.pkl')\n",
|
||||
"print(model.name, model.id, model.version, sep = '\\t')"
|
||||
"print(model.name, model.id, model.version, sep='\\t')"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -681,8 +650,7 @@
|
||||
"\n",
|
||||
"> * Set up your development environment\n",
|
||||
"> * Access and examine the data\n",
|
||||
"> * Train a simple logistic regression locally using the popular scikit-learn machine learning library\n",
|
||||
"> * Train multiple models on a remote cluster\n",
|
||||
"> * Train multiple models on a remote cluster using the popular scikit-learn machine learning library\n",
|
||||
"> * Review training details and register the best model\n",
|
||||
"\n",
|
||||
"You are ready to deploy this registered model using the instructions in the next part of the tutorial series:\n",
|
||||
@@ -712,9 +680,9 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.2"
|
||||
"version": "3.6.8"
|
||||
},
|
||||
"msauthor": "sgilley"
|
||||
"msauthor": "haining"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
|
||||
@@ -27,7 +27,7 @@
|
||||
"> * Deploy the model to ACI\n",
|
||||
"> * Test the deployed model\n",
|
||||
"\n",
|
||||
"ACI is not ideal for production deployments, but it is great for testing and understanding the workflow. For scalable production deployments, consider using AKS.\n",
|
||||
"ACI is a great solution for testing and understanding the workflow. For scalable production deployments, consider using Azure Kubernetes Service. For more information, see [how to deploy and where](https://docs.microsoft.com/azure/machine-learning/service/how-to-deploy-and-where).\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"## Prerequisites\n",
|
||||
@@ -68,10 +68,12 @@
|
||||
"import os\n",
|
||||
"import urllib.request\n",
|
||||
"\n",
|
||||
"os.makedirs('./data', exist_ok=True)\n",
|
||||
"data_folder = os.path.join(os.getcwd(), 'data')\n",
|
||||
"os.makedirs(data_folder, exist_ok = True)\n",
|
||||
"\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename='./data/test-images.gz')\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename='./data/test-labels.gz')"
|
||||
"\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'test-images.gz'))\n",
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'test-labels.gz'))"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -101,7 +103,7 @@
|
||||
"import numpy as np\n",
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
" \n",
|
||||
"import azureml\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"# display the core SDK version number\n",
|
||||
"print(\"Azure ML SDK Version: \", azureml.core.VERSION)"
|
||||
@@ -127,11 +129,18 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"from azureml.core.model import Model\n",
|
||||
"import os \n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"model=Model(ws, 'sklearn_mnist')\n",
|
||||
"model.download(target_dir='.', exist_ok=True)\n",
|
||||
"\n",
|
||||
"model.download(target_dir=os.getcwd(), exist_ok=True)\n",
|
||||
"\n",
|
||||
"# verify the downloaded model file\n",
|
||||
"os.stat('./sklearn_mnist_model.pkl')"
|
||||
"file_path = os.path.join(os.getcwd(), \"sklearn_mnist_model.pkl\")\n",
|
||||
"\n",
|
||||
"os.stat(file_path)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -157,10 +166,12 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from utils import load_data\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"data_folder = os.path.join(os.getcwd(), 'data')\n",
|
||||
"# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster\n",
|
||||
"X_test = load_data('./data/test-images.gz', False) / 255.0\n",
|
||||
"y_test = load_data('./data/test-labels.gz', True).reshape(-1)"
|
||||
"X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0\n",
|
||||
"y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -181,7 +192,7 @@
|
||||
"import pickle\n",
|
||||
"from sklearn.externals import joblib\n",
|
||||
"\n",
|
||||
"clf = joblib.load('./sklearn_mnist_model.pkl')\n",
|
||||
"clf = joblib.load( os.path.join(os.getcwd(), 'sklearn_mnist_model.pkl'))\n",
|
||||
"y_hat = clf.predict(X_test)"
|
||||
]
|
||||
},
|
||||
@@ -220,7 +231,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# normalize the diagnal cells so that they don't overpower the rest of the cells when visualized\n",
|
||||
"# normalize the diagonal cells so that they don't overpower the rest of the cells when visualized\n",
|
||||
"row_sums = conf_mx.sum(axis=1, keepdims=True)\n",
|
||||
"norm_conf_mx = conf_mx / row_sums\n",
|
||||
"np.fill_diagonal(norm_conf_mx, 0)\n",
|
||||
@@ -282,7 +293,7 @@
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global model\n",
|
||||
" # retreive the path to the model file using the model name\n",
|
||||
" # retrieve the path to the model file using the model name\n",
|
||||
" model_path = Model.get_model_path('sklearn_mnist')\n",
|
||||
" model = joblib.load(model_path)\n",
|
||||
"\n",
@@ -13,14 +13,16 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Tutorial (part 1): Prepare data for regression modeling"
|
||||
"# Tutorial: Prepare data for regression modeling"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In this tutorial, you learn how to prep data for regression modeling using the Azure Machine Learning Data Prep SDK. Perform various transformations to filter and combine two different NYC Taxi data sets. The end goal of this tutorial set is to predict the cost of a taxi trip by training a model on data features including pickup hour, day of week, number of passengers, and coordinates. This tutorial is part one of a two-part tutorial series.\n",
|
||||
"In this tutorial, you learn how to prepare data for regression modeling by using the Azure Machine Learning Data Prep SDK. You run various transformations to filter and combine two different NYC taxi data sets.\n",
|
||||
"\n",
|
||||
"This tutorial is **part one of a two-part tutorial series**. After you complete the tutorial series, you can predict the cost of a taxi trip by training a model on data features. These features include the pickup day and time, the number of passengers, and the pickup location.\n",
|
||||
"\n",
|
||||
"In this tutorial, you:\n",
|
||||
"\n",
|
||||
@@ -29,17 +31,39 @@
|
||||
"> * Load two datasets with different field names\n",
|
||||
"> * Cleanse data to remove anomalies\n",
|
||||
"> * Transform data using intelligent transforms to create new features\n",
|
||||
"> * Save your dataflow object to use in a regression model\n",
|
||||
"\n",
|
||||
"You can prepare your data in Python using the [Azure Machine Learning Data Prep SDK](https://aka.ms/data-prep-sdk)."
|
||||
"> * Save your dataflow object to use in a regression model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Import packages\n",
|
||||
"Begin by importing the SDK."
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"To run the notebook you will need:\n",
|
||||
"\n",
|
||||
"* A Python 3.6 notebook server with the following installed:\n",
|
||||
" * The Azure Machine Learning Data Prep SDK for Python\n",
|
||||
"* The tutorial notebook\n",
|
||||
"\n",
|
||||
"Navigate back to the [tutorial page](https://docs.microsoft.com/azure/machine-learning/service/tutorial-data-prep) for specific environment setup instructions.\n",
|
||||
"\n",
|
||||
"## <a name=\"start\"></a>Set up your development environment\n",
|
||||
"\n",
|
||||
"All the setup for your development work can be accomplished in a Python notebook. Setup includes the following actions:\n",
|
||||
"\n",
|
||||
"* Install the SDK\n",
|
||||
"* Import Python packages\n",
|
||||
"\n",
|
||||
"### Install and import packages\n",
|
||||
"\n",
|
||||
"Use the following to install necessary packages if you don't already have them.\n",
|
||||
"\n",
|
||||
"```shell\n",
|
||||
"pip install azureml-dataprep\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Import the SDK."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -65,17 +89,25 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from IPython.display import display\n",
|
||||
"dataset_root = \"https://dprepdata.blob.core.windows.net/demo\"\n",
|
||||
"\n",
|
||||
"green_path = \"/\".join([dataset_root, \"green-small/*\"])\n",
|
||||
"yellow_path = \"/\".join([dataset_root, \"yellow-small/*\"])\n",
|
||||
"\n",
|
||||
"green_df = dprep.read_csv(path=green_path, header=dprep.PromoteHeadersMode.GROUPED)\n",
|
||||
"# auto_read_file will automatically identify and parse the file type, and is useful if you don't know the file type\n",
|
||||
"yellow_df = dprep.auto_read_file(path=yellow_path)\n",
|
||||
"green_df_raw = dprep.read_csv(path=green_path, header=dprep.PromoteHeadersMode.GROUPED)\n",
|
||||
"# auto_read_file automatically identifies and parses the file type, which is useful when you don't know the file type.\n",
|
||||
"yellow_df_raw = dprep.auto_read_file(path=yellow_path)\n",
|
||||
"\n",
|
||||
"green_df.head(5)\n",
|
||||
"yellow_df.head(5)"
|
||||
"display(green_df_raw.head(5))\n",
|
||||
"display(yellow_df_raw.head(5))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"A `Dataflow` object is similar to a dataframe, and represents a series of lazily-evaluated, immutable operations on data. Operations can be added by invoking the different transformation and filtering methods available. The result of adding an operation to a `Dataflow` is always a new `Dataflow` object."
|
||||
]
|
||||
},
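As a quick illustration of that immutability: each call records an operation on a new `Dataflow`, and nothing executes until you materialize results (for example with `head()` or `get_profile()`). A minimal sketch, using a column that appears in the rename map further below:

```python
# Nothing runs yet: each step just adds an operation to a new Dataflow.
step1 = green_df_raw.replace_na(columns=['Trip_distance'])
step2 = step1.keep_columns(columns=['Trip_distance'])

# Materializing triggers execution; green_df_raw itself is unchanged.
step2.head(5)
```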
{
|
||||
@@ -89,7 +121,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now you populate some variables with shortcut transforms that will apply to all dataflows. The variable `drop_if_all_null` will be used to delete records where all fields are null. The variable `useful_columns` holds an array of column descriptions that are retained in each dataflow."
|
||||
"Now you populate some variables with shortcut transforms to apply to all dataflows. The `drop_if_all_null` variable is used to delete records where all fields are null. The `useful_columns` variable holds an array of column descriptions that are kept in each dataflow."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -110,7 +142,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You first work with the green taxi data and get it into a valid shape that can be combined with the yellow taxi data. Create a temporary dataflow `tmp_df`, and call the `replace_na()`, `drop_nulls()`, and `keep_columns()` functions using the shortcut transform variables you created. Additionally, rename all the columns in the dataframe to match the names in `useful_columns`."
|
||||
"You first work with the green taxi data to get it into a valid shape that can be combined with the yellow taxi data. Call the `replace_na()`, `drop_nulls()`, and `keep_columns()` functions by using the shortcut transform variables you created. Additionally, rename all the columns in the dataframe to match the names in the `useful_columns` variable."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -119,7 +151,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tmp_df = (green_df\n",
|
||||
"green_df = (green_df_raw\n",
|
||||
" .replace_na(columns=all_columns)\n",
|
||||
" .drop_nulls(*drop_if_all_null)\n",
|
||||
" .rename_columns(column_pairs={\n",
|
||||
@@ -138,14 +170,14 @@
|
||||
" \"Trip_distance\": \"distance\"\n",
|
||||
" })\n",
|
||||
" .keep_columns(columns=useful_columns))\n",
|
||||
"tmp_df.head(5)"
|
||||
"green_df.head(5)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Overwrite the `green_df` variable with the transforms performed on `tmp_df` in the previous step."
|
||||
"Run the same transformation steps on the yellow taxi data. These functions ensure that null data is removed from the data set, which will help increase machine learning model accuracy."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -154,23 +186,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"green_df = tmp_df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Perform the same transformation steps to the yellow taxi data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tmp_df = (yellow_df\n",
|
||||
"yellow_df = (yellow_df_raw\n",
|
||||
" .replace_na(columns=all_columns)\n",
|
||||
" .drop_nulls(*drop_if_all_null)\n",
|
||||
" .rename_columns(column_pairs={\n",
|
||||
@@ -195,14 +211,14 @@
|
||||
" \"trip_distance\": \"distance\"\n",
|
||||
" })\n",
|
||||
" .keep_columns(columns=useful_columns))\n",
|
||||
"tmp_df.head(5)"
|
||||
"yellow_df.head(5)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Again, overwrite `yellow_df` with `tmp_df`, and then call the `append_rows()` function on the green taxi data to append the yellow taxi data, creating a new combined dataframe."
|
||||
"Call the `append_rows()` function on the green taxi data to append the yellow taxi data. A new combined dataframe is created."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -211,7 +227,6 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"yellow_df = tmp_df\n",
|
||||
"combined_df = green_df.append_rows([yellow_df])"
|
||||
]
|
||||
},
|
||||
@@ -226,7 +241,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Examine the pickup and drop-off coordinates summary statistics to see how the data is distributed. First define a `TypeConverter` object to change the lat/long fields to decimal type. Next, call the `keep_columns()` function to restrict output to only the lat/long fields, and then call `get_profile()`."
|
||||
"Examine the pickup and drop-off coordinates summary statistics to see how the data is distributed. First, define a `TypeConverter` object to change the latitude and longitude fields to decimal type. Next, call the `keep_columns()` function to restrict output to only the latitude and longitude fields, and then call the `get_profile()` function. These function calls create a condensed view of the dataflow to just show the lat/long fields, which makes it easier to evaluate missing or out-of-scope coordinates."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -243,7 +258,7 @@
|
||||
" \"dropoff_latitude\": decimal_type\n",
|
||||
"})\n",
|
||||
"combined_df.keep_columns(columns=[\n",
|
||||
" \"pickup_longitude\", \"pickup_latitude\", \n",
|
||||
" \"pickup_longitude\", \"pickup_latitude\",\n",
|
||||
" \"dropoff_longitude\", \"dropoff_latitude\"\n",
|
||||
"]).get_profile()"
|
||||
]
|
||||
@@ -252,7 +267,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"From the summary statistics output, you see that there are coordinates that are missing, and coordinates that are not in New York City. Filter out coordinates not in the city border by chaining column filter commands within the `filter()` function, and defining minimum and maximum bounds for each field. Then call `get_profile()` again to verify the transformation."
|
||||
"From the summary statistics output, you see there are missing coordinates and coordinates that aren't in New York City (this is determined from subjective analysis). Filter out coordinates for locations that are outside the city border. Chain the column filter commands within the `filter()` function and define the minimum and maximum bounds for each field. Then call the `get_profile()` function again to verify the transformation."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -261,11 +276,11 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tmp_df = (combined_df\n",
|
||||
"latlong_filtered_df = (combined_df\n",
|
||||
" .drop_nulls(\n",
|
||||
" columns=[\"pickup_longitude\", \"pickup_latitude\", \"dropoff_longitude\", \"dropoff_latitude\"],\n",
|
||||
" column_relationship=dprep.ColumnRelationship(dprep.ColumnRelationship.ANY)\n",
|
||||
" ) \n",
|
||||
" )\n",
|
||||
" .filter(dprep.f_and(\n",
|
||||
" dprep.col(\"pickup_longitude\") <= -73.72,\n",
|
||||
" dprep.col(\"pickup_longitude\") >= -74.09,\n",
|
||||
@@ -276,28 +291,12 @@
|
||||
" dprep.col(\"dropoff_latitude\") <= 40.88,\n",
|
||||
" dprep.col(\"dropoff_latitude\") >= 40.53\n",
|
||||
" )))\n",
|
||||
"tmp_df.keep_columns(columns=[\n",
|
||||
" \"pickup_longitude\", \"pickup_latitude\", \n",
|
||||
"latlong_filtered_df.keep_columns(columns=[\n",
|
||||
" \"pickup_longitude\", \"pickup_latitude\",\n",
|
||||
" \"dropoff_longitude\", \"dropoff_latitude\"\n",
|
||||
"]).get_profile()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Overwrite `combined_df` with the transformations you made to `tmp_df`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"combined_df = tmp_df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -309,7 +308,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Look at the data profile for the `store_forward` column."
|
||||
"Look at the data profile for the `store_forward` column. This field is a boolean flag that is `Y` when the taxi did not have a connection to the server after the trip, and thus had to store the trip data in memory, and later forward it to the server when connected."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -318,14 +317,14 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"combined_df.keep_columns(columns='store_forward').get_profile()"
|
||||
"latlong_filtered_df.keep_columns(columns='store_forward').get_profile()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"From the data profile output of `store_forward`, you see that the data is inconsistent and there are missing/null values. Replace these values using the `replace()` and `fill_nulls()` functions, and in both cases change to the string \"N\"."
|
||||
"Notice that the data profile output in the `store_forward` column shows that the data is inconsistent and there are missing or null values. Use the `replace()` and `fill_nulls()` functions to replace these values with the string \"N\":"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -334,14 +333,14 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"combined_df = combined_df.replace(columns=\"store_forward\", find=\"0\", replace_with=\"N\").fill_nulls(\"store_forward\", \"N\")"
|
||||
"replaced_stfor_vals_df = latlong_filtered_df.replace(columns=\"store_forward\", find=\"0\", replace_with=\"N\").fill_nulls(\"store_forward\", \"N\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Execute another `replace` function, this time on the `distance` field. This reformats distance values that are incorrectly labeled as `.00`, and fills any nulls with zeros. Convert the `distance` field to numerical format."
|
||||
"Execute the `replace` function on the `distance` field. The function reformats distance values that are incorrectly labeled as `.00`, and fills any nulls with zeros. Convert the `distance` field to numerical format. These incorrect data points are likely anomolies in the data collection system on the taxi cabs."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -350,15 +349,15 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"combined_df = combined_df.replace(columns=\"distance\", find=\".00\", replace_with=0).fill_nulls(\"distance\", 0)\n",
|
||||
"combined_df = combined_df.to_number([\"distance\"])"
|
||||
"replaced_distance_vals_df = replaced_stfor_vals_df.replace(columns=\"distance\", find=\".00\", replace_with=0).fill_nulls(\"distance\", 0)\n",
|
||||
"replaced_distance_vals_df = replaced_distance_vals_df.to_number([\"distance\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Split the pick up and drop off datetimes into respective date and time columns. Use `split_column_by_example()` to perform the split. In this case, the optional `example` parameter of `split_column_by_example()` is omitted. Therefore the function will automatically determine where to split based on the data."
|
||||
"Split the pickup and dropoff datetime values into the respective date and time columns. Use the `split_column_by_example()` function to make the split. In this case, the optional `example` parameter of the `split_column_by_example()` function is omitted. Therefore, the function automatically determines where to split based on the data."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -367,10 +366,10 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tmp_df = (combined_df\n",
|
||||
"time_split_df = (replaced_distance_vals_df\n",
|
||||
" .split_column_by_example(source_column=\"pickup_datetime\")\n",
|
||||
" .split_column_by_example(source_column=\"dropoff_datetime\"))\n",
|
||||
"tmp_df.head(5)"
|
||||
"time_split_df.head(5)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -386,21 +385,21 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tmp_df_renamed = (tmp_df\n",
|
||||
"renamed_col_df = (time_split_df\n",
|
||||
" .rename_columns(column_pairs={\n",
|
||||
" \"pickup_datetime_1\": \"pickup_date\",\n",
|
||||
" \"pickup_datetime_2\": \"pickup_time\",\n",
|
||||
" \"dropoff_datetime_1\": \"dropoff_date\",\n",
|
||||
" \"dropoff_datetime_2\": \"dropoff_time\"\n",
|
||||
" }))\n",
|
||||
"tmp_df_renamed.head(5)"
|
||||
"renamed_col_df.head(5)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Overwrite `combined_df` with the executed transformations, and then call `get_profile()` to see full summary statistics after all transformations."
|
||||
"Call the `get_profile()` function to see the full summary statistics after all cleansing steps."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -409,8 +408,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"combined_df = tmp_df_renamed\n",
|
||||
"combined_df.get_profile()"
|
||||
"renamed_col_df.get_profile()"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -424,9 +422,11 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Split the pickup and drop-off date further into day of week, day of month, and month. To get day of week, use the `derive_column_by_example()` function. This function takes as a parameter an array of example objects that define the input data, and the desired output. The function then automatically determines your desired transformation. For pickup and drop-off time columns, split into hour, minute, and second using the `split_column_by_example()` function with no example parameter.\n",
|
||||
"Split the pickup and dropoff date further into the day of the week, day of the month, and month values. To get the day of the week value, use the `derive_column_by_example()` function. The function takes an array parameter of example objects that define the input data, and the preferred output. The function automatically determines your preferred transformation. For the pickup and dropoff time columns, split the time into the hour, minute, and second by using the `split_column_by_example()` function with no example parameter.\n",
|
||||
"\n",
|
||||
"Once you have generated these new features, delete the original fields in favor of the newly generated features using `drop_columns()`. Rename all remaining fields to accurate descriptions."
|
||||
"After you generate the new features, use the `drop_columns()` function to delete the original fields as the newly generated features are preferred. Rename the rest of the fields to use meaningful descriptions.\n",
|
||||
"\n",
|
||||
"Transforming the data in this way to create new time-based features will improve machine learning model accuracy. For example, generating a new feature for the weekday will help establish a relationship between the day of the week and the taxi fare price, which is often more expensive on certain days of the week due to high demand."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -435,10 +435,10 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tmp_df = (combined_df\n",
|
||||
"transformed_features_df = (renamed_col_df\n",
|
||||
" .derive_column_by_example(\n",
|
||||
" source_columns=\"pickup_date\", \n",
|
||||
" new_column_name=\"pickup_weekday\", \n",
|
||||
" source_columns=\"pickup_date\",\n",
|
||||
" new_column_name=\"pickup_weekday\",\n",
|
||||
" example_data=[(\"2009-01-04\", \"Sunday\"), (\"2013-08-22\", \"Thursday\")]\n",
|
||||
" )\n",
|
||||
" .derive_column_by_example(\n",
|
||||
@@ -446,17 +446,17 @@
|
||||
" new_column_name=\"dropoff_weekday\",\n",
|
||||
" example_data=[(\"2013-08-22\", \"Thursday\"), (\"2013-11-03\", \"Sunday\")]\n",
|
||||
" )\n",
|
||||
" \n",
|
||||
"\n",
|
||||
" .split_column_by_example(source_column=\"pickup_time\")\n",
|
||||
" .split_column_by_example(source_column=\"dropoff_time\")\n",
|
||||
" # the following two split_column_by_example calls reference the generated column names from the above two calls\n",
|
||||
" # The following two calls to split_column_by_example reference the column names generated from the previous two calls.\n",
|
||||
" .split_column_by_example(source_column=\"pickup_time_1\")\n",
|
||||
" .split_column_by_example(source_column=\"dropoff_time_1\")\n",
|
||||
" .drop_columns(columns=[\n",
|
||||
" \"pickup_date\", \"pickup_time\", \"dropoff_date\", \"dropoff_time\", \n",
|
||||
" \"pickup_date\", \"pickup_time\", \"dropoff_date\", \"dropoff_time\",\n",
|
||||
" \"pickup_date_1\", \"dropoff_date_1\", \"pickup_time_1\", \"dropoff_time_1\"\n",
|
||||
" ])\n",
|
||||
" \n",
|
||||
"\n",
|
||||
" .rename_columns(column_pairs={\n",
|
||||
" \"pickup_date_2\": \"pickup_month\",\n",
|
||||
" \"pickup_date_3\": \"pickup_monthday\",\n",
|
||||
@@ -470,14 +470,14 @@
|
||||
" \"dropoff_time_2\": \"dropoff_second\"\n",
|
||||
" }))\n",
|
||||
"\n",
|
||||
"tmp_df.head(5)"
|
||||
"transformed_features_df.head(5)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"From the data above, you see that the pickup and drop-off date and time components produced from the derived transformations are correct. Drop the `pickup_datetime` and `dropoff_datetime` columns as they are no longer needed."
|
||||
"Notice that the data shows that the pickup and dropoff date and time components produced from the derived transformations are correct. Drop the `pickup_datetime` and `dropoff_datetime` columns because they're no longer needed (granular time features like hour, minute and second are more useful for model training)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -486,7 +486,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tmp_df = tmp_df.drop_columns(columns=[\"pickup_datetime\", \"dropoff_datetime\"])"
|
||||
"processed_df = transformed_features_df.drop_columns(columns=[\"pickup_datetime\", \"dropoff_datetime\"])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -502,7 +502,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"type_infer = tmp_df.builders.set_column_types()\n",
|
||||
"type_infer = processed_df.builders.set_column_types()\n",
|
||||
"type_infer.learn()\n",
|
||||
"type_infer"
|
||||
]
|
||||
@@ -511,7 +511,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The inference results look correct based on the data, now apply the type conversions to the dataflow."
|
||||
"The inference results look correct based on the data. Now apply the type conversions to the dataflow."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -520,15 +520,15 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tmp_df = type_infer.to_dataflow()\n",
|
||||
"tmp_df.get_profile()"
|
||||
"type_converted_df = type_infer.to_dataflow()\n",
|
||||
"type_converted_df.get_profile()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Before packaging the dataflow, perform two final filters on the data set. To eliminate incorrect data points, filter the dataflow on records where both the `cost` and `distance` are greater than zero."
|
||||
"Before you package the dataflow, run two final filters on the data set. To eliminate incorrectly captured data points, filter the dataflow on records where both the `cost` and `distance` variable values are greater than zero. This step will significantly improve machine learning model accuracy, because data points with a zero cost or distance represent major outliers that throw off prediction accuracy."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -537,15 +537,15 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tmp_df = tmp_df.filter(dprep.col(\"distance\") > 0)\n",
|
||||
"tmp_df = tmp_df.filter(dprep.col(\"cost\") > 0)"
|
||||
"final_df = type_converted_df.filter(dprep.col(\"distance\") > 0)\n",
|
||||
"final_df = final_df.filter(dprep.col(\"cost\") > 0)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"At this point, you have a fully transformed and prepared dataflow object to use in a machine learning model. The DataPrep SDK includes object serialization functionality, which is used as follows."
|
||||
"You now have a fully transformed and prepared dataflow object to use in a machine learning model. The SDK includes object serialization functionality, which is used as shown in the following code."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -557,8 +557,7 @@
|
||||
"import os\n",
|
||||
"file_path = os.path.join(os.getcwd(), \"dflows.dprep\")\n",
|
||||
"\n",
|
||||
"dflow_prepared = tmp_df\n",
|
||||
"package = dprep.Package([dflow_prepared])\n",
|
||||
"package = dprep.Package([final_df])\n",
|
||||
"package.save(file_path)"
|
||||
]
|
||||
},
|
||||
@@ -573,7 +572,9 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Delete the file `dflows.dprep` (whether you are running locally or in Azure Notebooks) in your current directory if you do not wish to continue with part two of the tutorial. If you continue on to part two, you will need the `dflows.dprep` file in the current directory."
|
||||
"To continue with part two of the tutorial, you need the **dflows.dprep** file in the current directory.\n",
|
||||
"\n",
|
||||
"If you don't plan to continue to part two, delete the **dflows.dprep** file in your current directory. Delete this file whether you're running the execution locally or in [Azure Notebooks](https://notebooks.azure.com/)."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -11,20 +11,19 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Tutorial (part 2): Use automated machine learning to build your regression model \n",
|
||||
"# Tutorial: Use automated machine learning to build your regression model\n",
|
||||
"\n",
|
||||
"This tutorial is **part two of a two-part tutorial series**. In the previous tutorial, you [prepared the NYC taxi data for regression modeling](regression-part1-data-prep.ipynb).\n",
|
||||
"\n",
|
||||
"Now, you're ready to start building your model with Azure Machine Learning service. In this part of the tutorial, you will use the prepared data and automatically generate a regression model to predict taxi fare prices. Using the automated ML capabilities of the service, you define your machine learning goals and constraints, launch the automated machine learning process and then allow the algorithm selection and hyperparameter-tuning to happen for you. The automated ML technique iterates over many combinations of algorithms and hyperparameters until it finds the best model based on your criterion.\n",
|
||||
"Now you're ready to start building your model with Azure Machine Learning service. In this part of the tutorial, you use the prepared data and automatically generate a regression model to predict taxi fare prices. By using the automated machine learning capabilities of the service, you define your machine learning goals and constraints. You launch the automated machine learning process. Then allow the algorithm selection and hyperparameter tuning to happen for you. The automated machine learning technique iterates over many combinations of algorithms and hyperparameters until it finds the best model based on your criterion.\n",
|
||||
"\n",
|
||||
"In this tutorial, you learn how to:\n",
|
||||
"In this tutorial, you learn the following tasks:\n",
|
||||
"\n",
|
||||
"> * Setup a Python environment and import the SDK packages\n",
|
||||
"> * Set up a Python environment and import the SDK packages\n",
|
||||
"> * Configure an Azure Machine Learning service workspace\n",
|
||||
"> * Auto-train a regression model \n",
|
||||
"> * Run the model locally with custom parameters\n",
|
||||
"> * Explore the results\n",
|
||||
"> * Register the best model\n",
|
||||
"\n",
|
||||
"If you don\u00e2\u20ac\u2122t have an Azure subscription, create a [free account](https://aka.ms/AMLfree) before you begin. \n",
|
||||
"\n",
|
||||
@@ -33,17 +32,40 @@
|
||||
"\n",
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"> * [Run the data preparation tutorial](regression-part1-data-prep.ipynb)\n",
|
||||
"To run the notebook you will need:\n",
|
||||
"\n",
|
||||
"> * Automated machine learning configured environment e.g. Azure notebooks, Local Python environment or Data Science Virtual Machine. [Setup](https://docs.microsoft.com/azure/machine-learning/service/samples-notebooks) automated machine learning."
|
||||
"* [Run the data preparation tutorial](regression-part1-data-prep.ipynb).\n",
|
||||
"* A Python 3.6 notebook server with the following installed:\n",
|
||||
" * The Azure Machine Learning SDK for Python with `automl` and `notebooks` extras\n",
|
||||
" * `matplotlib`\n",
|
||||
"* The tutorial notebook\n",
|
||||
"* A machine learning workspace\n",
|
||||
"* The configuration file for the workspace in the same directory as the notebook\n",
|
||||
"\n",
|
||||
"Navigate back to the [tutorial page](https://docs.microsoft.com/azure/machine-learning/service/tutorial-auto-train-models) for specific environment setup instructions.\n",
|
||||
"\n",
|
||||
"## <a name=\"start\"></a>Set up your development environment\n",
|
||||
"\n",
|
||||
"All the setup for your development work can be accomplished in a Python notebook. Setup includes the following actions:\n",
|
||||
"\n",
|
||||
"* Install the SDK\n",
|
||||
"* Import Python packages\n",
|
||||
"* Configure your workspace"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Import packages\n",
|
||||
"Import Python packages you need in this tutorial."
|
||||
"### Install and import packages\n",
|
||||
"\n",
|
||||
"If you are following the tutorial in your own Python environment, use the following to install necessary packages.\n",
|
||||
"\n",
|
||||
"```shell\n",
|
||||
"pip install azureml-sdk[automl,notebooks] matplotlib\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Import the Python packages you need in this tutorial:"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -55,7 +77,8 @@
|
||||
"import azureml.core\n",
|
||||
"import pandas as pd\n",
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"import logging"
|
||||
"import logging\n",
|
||||
"import os"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -64,9 +87,11 @@
|
||||
"source": [
|
||||
"### Configure workspace\n",
|
||||
"\n",
|
||||
"Create a workspace object from the existing workspace. A `Workspace` is a class that accepts your Azure subscription and resource information, and creates a cloud resource to monitor and track your model runs. `Workspace.from_config()` reads the file **aml_config/config.json** and loads the details into an object named `ws`. `ws` is used throughout the rest of the code in this tutorial.\n",
|
||||
"Create a workspace object from the existing workspace. A `Workspace` is a class that accepts your Azure subscription and resource information. It also creates a cloud resource to monitor and track your model runs.\n",
|
||||
"\n",
|
||||
"Once you have a workspace object, specify a name for the experiment and create and register a local directory with the workspace. The history of all runs is recorded under the specified experiment."
|
||||
"`Workspace.from_config()` reads the file **aml_config/config.json** and loads the details into an object named `ws`. `ws` is used throughout the rest of the code in this tutorial.\n",
|
||||
"\n",
|
||||
"After you have a workspace object, specify a name for the experiment. Create and register a local directory with the workspace. The history of all runs is recorded under the specified experiment and in the [Azure portal](https://portal.azure.com)."
|
||||
]
|
||||
},
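The workspace and experiment objects themselves are created in code that this hunk trims away; a minimal sketch, assuming **aml_config/config.json** is present (the experiment name is illustrative):

```python
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment

ws = Workspace.from_config()                      # loads aml_config/config.json
experiment_name = 'automated-ml-regression'       # illustrative name
experiment = Experiment(ws, experiment_name)
```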
{
|
||||
@@ -81,8 +106,6 @@
|
||||
"# project folder\n",
|
||||
"project_folder = './automated-ml-regression'\n",
|
||||
"\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"output = {}\n",
|
||||
"output['SDK version'] = azureml.core.VERSION\n",
|
||||
"output['Subscription ID'] = ws.subscription_id\n",
|
||||
@@ -101,7 +124,7 @@
|
||||
"source": [
|
||||
"## Explore data\n",
|
||||
"\n",
|
||||
"Utilize the data flow object created in the previous tutorial. Open and execute the data flow and review the results."
|
||||
"Use the data flow object created in the previous tutorial. To summarize, part 1 of this tutorial cleaned the NYC Taxi data so it could be used in a machine learning model. Now, you use various features from the data set and allow an automated model to build relationships between the features and the price of a taxi trip. Open and run the data flow and review the results:"
|
||||
]
|
||||
},
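The cell that reopens the saved package is not visible in this hunk. A hedged sketch of how the part-one output is typically loaded back with the Data Prep SDK of this era (treat the exact API as an approximation):

```python
import os
import azureml.dataprep as dprep

file_path = os.path.join(os.getcwd(), 'dflows.dprep')   # written at the end of part one
package_saved = dprep.Package.open(file_path)
dflow_prepared = package_saved.dataflows[0]
dflow_prepared.get_profile()
```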
{
|
||||
@@ -123,7 +146,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You prepare the data for the experiment by adding columns to `dflow_X` to be features for our model creation. You define `dflow_y` to be our prediction value; cost.\n"
|
||||
"You prepare the data for the experiment by adding columns to `dflow_x` to be features for our model creation. You define `dflow_y` to be our prediction value, **cost**:\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -142,7 +165,7 @@
|
||||
"source": [
|
||||
"### Split data into train and test sets\n",
|
||||
"\n",
|
||||
"Now you split the data into training and test sets using the `train_test_split` function in the `sklearn` library. This function segregates the data into the x (features) data set for model training and the y (values to predict) data set for testing. The `test_size` parameter determines the percentage of data to allocate to testing. The `random_state` parameter sets a seed to the random generator, so that your train-test splits are always deterministic."
|
||||
"Now you split the data into training and test sets by using the `train_test_split` function in the `sklearn` library. This function segregates the data into the x, **features**, dataset for model training and the y, **values to predict**, dataset for testing. The `test_size` parameter determines the percentage of data to allocate to testing. The `random_state` parameter sets a seed to the random generator, so that your train-test splits are always deterministic:"
|
||||
]
|
||||
},
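The split cell is elided here; a minimal sketch, assuming `dflow_x` and `dflow_y` are the dataflows described above (the split fraction and seed are illustrative):

```python
from sklearn.model_selection import train_test_split

x_df = dflow_x.to_pandas_dataframe()
y_df = dflow_y.to_pandas_dataframe()

x_train, x_test, y_train, y_test = train_test_split(x_df, y_df,
                                                    test_size=0.2,
                                                    random_state=223)
```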
{
|
||||
@@ -166,28 +189,28 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You now have the necessary packages and data ready for auto training for your model. \n",
|
||||
"The purpose of this step is to have data points to test the finished model that haven't been used to train the model, in order to measure true accuracy. In other words, a well-trained model should be able to accurately make predictions from data it hasn't already seen. You now have the necessary packages and data ready for autotraining your model.\n",
|
||||
"\n",
|
||||
"## Automatically train a model\n",
|
||||
"\n",
|
||||
"To automatically train a model:\n",
|
||||
"1. Define settings for the experiment run\n",
|
||||
"1. Submit the experiment for model tuning\n",
|
||||
"To automatically train a model, take the following steps:\n",
|
||||
"1. Define settings for the experiment run. Attach your training data to the configuration, and modify settings that control the training process.\n",
|
||||
"1. Submit the experiment for model tuning. After submitting the experiment, the process iterates through different machine learning algorithms and hyperparameter settings, adhering to your defined constraints. It chooses the best-fit model by optimizing an accuracy metric.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"### Define settings for autogeneration and tuning\n",
|
||||
"\n",
|
||||
"Define the experiment parameters and models settings for autogeneration and tuning. View the full list of [settings](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train).\n",
"Define the experiment parameters and models settings for autogeneration and tuning. View the full list of [settings](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train). Submitting the experiment with these default settings will take approximately 10-15 min, but if you want a shorter run time, reduce either `iterations` or `iteration_timeout_minutes`.\n",
"\n",
"\n",
"|Property| Value in this tutorial |Description|\n",
"|----|----|---|\n",
"|**iteration_timeout_minutes**|10|Time limit in minutes for each iteration|\n",
"|**iterations**|30|Number of iterations. In each iteration, the model trains with the data with a specific pipeline|\n",
"|**primary_metric**|spearman_correlation | Metric that you want to optimize.|\n",
"|**preprocess**| True | True enables experiment to perform preprocessing on the input.|\n",
"|**iteration_timeout_minutes**|10|Time limit in minutes for each iteration. Reduce this value to decrease total runtime.|\n",
"|**iterations**|30|Number of iterations. In each iteration, a new machine learning model is trained with your data. This is the primary value that affects total run time.|\n",
"|**primary_metric**|spearman_correlation | Metric that you want to optimize. The best-fit model will be chosen based on this metric.|\n",
"|**preprocess**| True | By using **True**, the experiment can preprocess the input data (handling missing data, converting text to numeric, etc.)|\n",
"|**verbosity**| logging.INFO | Controls the level of logging.|\n",
"|**n_cross_validationss**|5|Number of cross validation splits\n"
"|**n_cross_validations**|5| Number of cross-validation splits to perform when validation data is not specified.|\n"
]
},
{
@@ -206,6 +229,13 @@
"}"
]
},
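Only the closing brace of the original settings cell is visible in the hunk above. A sketch of a dictionary matching the table of settings might look like this; the `automl_settings` variable name is an assumption.

```python
import logging

# Settings mirroring the table above; the variable name is an assumption.
automl_settings = {
    "iteration_timeout_minutes": 10,
    "iterations": 30,
    "primary_metric": 'spearman_correlation',
    "preprocess": True,
    "verbosity": logging.INFO,
    "n_cross_validations": 5
}
```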
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use your defined training settings as a parameter to an `AutoMLConfig` object. Additionally, specify your training data and the type of model, which is `regression` in this case."
]
},
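A sketch of the `AutoMLConfig` construction this new cell describes, assuming the `automl_settings` dictionary and training arrays from the earlier sketches; the `debug_log` file name is illustrative.

```python
from azureml.train.automl import AutoMLConfig

# Combine the training data, the regression task type, and the settings above.
automated_ml_config = AutoMLConfig(
    task='regression',
    debug_log='automated_ml_errors.log',  # illustrative log file name
    X=x_train.values,
    y=y_train.values.flatten(),
    **automl_settings)
```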
{
"cell_type": "code",
"execution_count": null,
@@ -233,7 +263,7 @@
"source": [
"### Train the automatic regression model\n",
"\n",
"Start the experiment to run locally. Pass the defined `automated_ml_config` object to the experiment, and set the output to `true` to view progress during the experiment."
"Start the experiment to run locally. Pass the defined `automated_ml_config` object to the experiment. Set the output to `True` to view progress during the experiment:"
]
},
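The diff only shows the `submit` call as context in the following hunk. A sketch of the surrounding experiment setup, assuming `ws` is the workspace configured earlier in the tutorial and using an illustrative experiment name:

```python
from azureml.core.experiment import Experiment

# Assumption: ws is the Workspace object configured earlier in the tutorial;
# the experiment name is illustrative.
experiment = Experiment(ws, "taxi-experiment")

# show_output=True streams each iteration's result while the run executes.
local_run = experiment.submit(automated_ml_config, show_output=True)
```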
{
@@ -252,6 +282,13 @@
"local_run = experiment.submit(automated_ml_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The output shown updates live as the experiment runs. For each iteration, you see the model type, the run duration, and the training accuracy. The field `BEST` tracks the best running training score based on your metric type."
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -262,7 +299,7 @@
"\n",
"### Option 1: Add a Jupyter widget to see results\n",
"\n",
"Use the Jupyter notebook widget to see a graph and a table of all results."
"If you use a Jupyter notebook, use this Jupyter notebook widget to see a graph and a table of all results:"
]
},
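A minimal sketch of the widget cell this text refers to, using the `azureml.widgets` API:

```python
from azureml.widgets import RunDetails

# Renders an interactive table and chart of every iteration of the run.
RunDetails(local_run).show()
```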
{
@@ -285,7 +322,7 @@
"source": [
"### Option 2: Get and examine all run iterations in Python\n",
"\n",
"Alternatively, you can retrieve the history of each experiment and explore the individual metrics for each iteration run."
"You can also retrieve the history of each experiment and explore the individual metrics for each iteration run. By examining RMSE (root_mean_squared_error) for each individual model run, you see that most iterations are predicting the taxi fare cost within a reasonable margin ($3-4).\n"
]
},
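A sketch of how such a cell typically gathers per-iteration metrics from the run object; the `iteration` property key and the pandas layout are assumptions, since the code cell is not shown in this diff.

```python
import pandas as pd

# Collect the metrics logged by each child run (one child run per iteration).
children = list(local_run.get_children())
metricslist = {}
for run in children:
    properties = run.get_properties()
    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
    metricslist[int(properties['iteration'])] = metrics

# One column per iteration, one row per metric (e.g. root_mean_squared_error).
rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
```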
{
@@ -316,7 +353,7 @@
"source": [
"## Retrieve the best model\n",
"\n",
"Select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last fit invocation. There are overloads on `get_output` that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration."
"Select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last fit invocation. By using the overloads on `get_output`, you can retrieve the best run and fitted model for any logged metric or a particular iteration:"
]
},
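A sketch of retrieving the best run and model from the submitted run object (`local_run` above); only the final `print` line of the original cell is visible in the following hunk, and the metric name and iteration number in the overload examples are illustrative.

```python
# Best run and fitted model from the last fit invocation.
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)

# Overloads: best model for a specific logged metric, or for a given iteration.
best_run, fitted_model = local_run.get_output(metric="root_mean_squared_error")
best_run, fitted_model = local_run.get_output(iteration=3)
```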
{
@@ -330,34 +367,13 @@
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the model\n",
"\n",
"Register the model in your Azure Machine Learning Workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"description = 'Automated Machine Learning Model'\n",
"tags = None\n",
"local_run.register_model(description=description, tags=tags)\n",
"print(local_run.model_id) # Use this id to deploy the model as a web service in Azure"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test the best model accuracy\n",
"\n",
"Use the best model to run predictions on the test data set. The function `predict` uses the best model, and predicts the values of y (trip cost) from the `x_test` data set. Print the first 10 predicted cost values from `y_predict`."
"Use the best model to run predictions on the test dataset to predict taxi fares. The function `predict` uses the best model and predicts the values of y, **trip cost**, from the `x_test` dataset. Print the first 10 predicted cost values from `y_predict`:"
]
},
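A minimal sketch of the prediction cell this text describes, assuming `fitted_model` and `x_test` from the earlier steps:

```python
# Predict taxi fares for the held-out test set with the best fitted model.
y_predict = fitted_model.predict(x_test.values)
print(y_predict[:10])
```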
{
@@ -374,7 +390,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a scatter plot to visualize the predicted cost values compared to the actual cost values. The following code uses the `distance` feature as the x-axis, and trip `cost` as the y-axis. The first 100 predicted and actual cost values are created as separate series, in order to compare the variance of predicted cost at each trip distance value. Examining the plot shows that the distance/cost relationship is nearly linear, and the predicted cost values are in most cases very close to the actual cost values for the same trip distance."
"Create a scatter plot to visualize the predicted cost values compared to the actual cost values. The following code uses the `distance` feature as the x-axis and trip `cost` as the y-axis. To compare the variance of predicted cost at each trip distance value, the first 100 predicted and actual cost values are created as separate series. Examining the plot shows that the distance/cost relationship is nearly linear, and the predicted cost values are in most cases very close to the actual cost values for the same trip distance."
]
},
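A sketch of the plotting cell described above, assuming the feature DataFrame still carries a `distance` column from part 1; marker styles and figure size are illustrative.

```python
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(14, 10))
ax1 = fig.add_subplot(111)

# First 100 test rows: predicted vs. actual cost, plotted against trip distance.
distance_vals = x_test['distance'].values[:100]
y_actual = y_test.values.flatten()

ax1.scatter(distance_vals, y_predict[:100], s=18, c='b', marker='s', label='Predicted')
ax1.scatter(distance_vals, y_actual[:100], s=18, c='r', marker='o', label='Actual')

ax1.set_title('Predicted and Actual Cost by Trip Distance')
ax1.set_xlabel('trip distance (miles)')
ax1.set_ylabel('cost ($)')
plt.legend(loc='upper left', prop={'size': 12})
plt.show()
```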
{
@@ -407,7 +423,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Calculate the `root mean squared error` of the results. Use the `y_test` dataframe, and convert it to a list to compare to the predicted values. The function `mean_squared_error` takes two arrays of values, and calculates the average squared error between them. Taking the square root of the result gives an error in the same units as the y variable (cost), and indicates roughly how far your predictions are from the actual value. "
" Calculate the `root mean squared error` of the results. Use the `y_test` dataframe. Convert it to a list to compare to the predicted values. The function `mean_squared_error` takes two arrays of values and calculates the average squared error between them. Taking the square root of the result gives an error in the same units as the y variable, **cost**. It indicates roughly how far the taxi fare predictions are from the actual fares:"
]
},
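A minimal sketch consistent with this description, using scikit-learn's `mean_squared_error`:

```python
from math import sqrt
from sklearn.metrics import mean_squared_error

# Flatten the y_test DataFrame into a list of actual costs, then compute RMSE.
y_actual = y_test.values.flatten().tolist()
rmse = sqrt(mean_squared_error(y_actual, y_predict))
print(rmse)
```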
{
@@ -427,7 +443,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Run the following code to calculate MAPE (mean absolute percent error) using the full `y_actual` and `y_predict` data sets. This metric calculates an absolute difference between each predicted and actual value, sums all the differences, and then expresses that sum as a percent of the total of the actual values."
"Run the following code to calculate mean absolute percent error (MAPE) by using the full `y_actual` and `y_predict` datasets. This metric calculates an absolute difference between each predicted and actual value and sums all the differences. Then it expresses that sum as a percent of the total of the actual values:"
]
},
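A sketch of the MAPE calculation; only the final `print` line of the original cell is visible in the hunk below, so the loop itself is reconstructed here as an assumption.

```python
# Sum the absolute errors and express them as a fraction of the total actual cost.
sum_actuals = sum_errors = 0
for actual_val, predict_val in zip(y_actual, y_predict):
    sum_errors += abs(actual_val - predict_val)
    sum_actuals += actual_val

mean_abs_percent_error = sum_errors / sum_actuals
print("Model MAPE:")
print(mean_abs_percent_error)
print("Model Accuracy:")
print(1 - mean_abs_percent_error)
```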
{
@@ -454,21 +470,46 @@
"print(1 - mean_abs_percent_error)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the final prediction accuracy metrics, you see that the model is fairly good at predicting taxi fares from the data set's features, typically within +- $3.00. The traditional machine learning model development process is highly resource-intensive, and requires significant domain knowledge and time investment to run and compare the results of dozens of models. Using automated machine learning is a great way to rapidly test many different models for your scenario."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up resources\n",
"\n",
">The resources you created can be used as prerequisites to other Azure Machine Learning service tutorials and how-to articles. \n",
"\n",
"\n",
"If you don't plan to use the resources you created, delete them, so you don't incur any charges:\n",
"\n",
"1. In the Azure portal, select **Resource groups** on the far left.\n",
"\n",
"1. From the list, select the resource group you created.\n",
"\n",
"1. Select **Delete resource group**.\n",
"\n",
"1. Enter the resource group name. Then select **Delete**."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"In this automated machine learning tutorial, you:\n",
"In this automated machine learning tutorial, you did the following tasks:\n",
"\n",
"* Configured a workspace and prepared data for an experiment.\n",
"* Trained by using an automated regression model locally with custom parameters.\n",
"* Explored and reviewed training results.\n",
"\n",
"> * Configured a workspace and prepared data for an experiment\n",
"> * Trained using an automated regression model locally with custom parameters\n",
"> * Explored and reviewed training results\n",
"> * Registered the best model\n",
"\n",
"You can also try out the [image classification tutorial](img-classification-part1-training.ipynb)."
"[Deploy your model](https://docs.microsoft.com/azure/machine-learning/service/tutorial-deploy-models-with-aml) with Azure Machine Learning."
]
}
],