Mirror of https://github.com/Azure/MachineLearningNotebooks.git, synced 2025-12-20 17:45:10 -05:00.

Compare commits: 14 commits on branch azureml-sd.
| SHA1 |
|---|
| e1a948f4cd |
| 3ca40c0817 |
| f724cb4d9b |
| 094b4b3b13 |
| d09942f521 |
| 0c9e527174 |
| e2640e54da |
| d348baf8a1 |
| b41e11e30d |
| c1aa951867 |
| 5fe5f06e07 |
| e8a09c49b1 |
| fb6a73a790 |
| c2968b6526 |
README.md
@@ -1,36 +1,34 @@
-# Azure Machine Learning service sample notebooks
----
+# Azure Machine Learning service example notebooks

 This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK
 which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK
 allows you the choice of using local or cloud compute resources, while managing
 and maintaining the complete data science workflow from the cloud.

-* Read [instructions on setting up notebooks](./NBSETUP.md) to run these notebooks.
-* Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
-
-## Getting Started
-
-These examples will provide you with an effective way to get started using AML. Once you're familiar with
-some of the capabilities, explore the repository for specific topics.
-
-- [Configuration](./configuration.ipynb) configures your notebook library to easily connect to an
-Azure Machine Learning workspace, and sets up your workspace to be used by many of the other examples. You should
-always run this first when setting up a notebook library on a new machine or in a new environment
-- [Train in notebook](./how-to-use-azureml/training/train-within-notebook) shows how to create a model directly in a notebook while recording
-metrics and deploy that model to a test service
-- [Train on remote](./how-to-use-azureml/training/train-on-remote-vm) takes the previous example and shows how to create the model on a cloud compute target
-- [Production deploy to AKS](./how-to-use-azureml/deployment/production-deploy-to-aks) shows how to create a production grade inferencing webservice
+(image)
+
+## How to use and navigate the example notebooks?
+
+You can set up your own Python environment or use Azure Notebooks with the Azure ML SDK pre-installed. Read [these instructions](./NBSETUP.md) to set up your environment and clone the example notebooks.
+
+You should always run the [Configuration](./configuration.ipynb) notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.
+
+If you want to...
+
+* ...try out and explore Azure ML, start with the image classification tutorials: [part 1 training](./tutorials/img-classification-part1-training.ipynb) and [part 2 deployment](./tutorials/img-classification-part2-deploy.ipynb).
+* ...learn about experimentation and tracking run history, first [train within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) and [using logging APIs](./how-to-use-azureml/training/logging-api/logging-api.ipynb).
+* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
+* ...deploy a model as a realtime scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [register and manage models, and create Docker images](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), and [production deploy models on Azure Kubernetes Cluster](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
+* ...deploy models as a batch scoring service, first [train a model within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), learn how to [register and manage models](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), then [create Machine Learning Compute for scoring compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](./how-to-use-azureml/machine-learning-pipelines/pipeline-mpi-batch-prediction.ipynb).
+* ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) and [model data collection](./how-to-use-azureml/deployment/enable-data-collection-for-models-in-aks/enable-data-collection-for-models-in-aks.ipynb).

 ## Tutorials

 The [Tutorials](./tutorials) folder contains notebooks for the tutorials described in the [Azure Machine Learning documentation](https://aka.ms/aml-docs)

-## How to use AML
+## How to use Azure ML

-The [How to use AML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
+The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK

 - [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets.
 - [Training with Deep Learning](./how-to-use-azureml/training-with-deep-learning) - Examples demonstrating how to build deep learning models using estimators and parameter sweeps
@@ -38,3 +36,21 @@ The [How to use AML](./how-to-use-azureml) folder contains specific examples dem
 - [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
 - [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions
 - [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks
+
+---
+
+## Documentation
+
+* Quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
+
+* [Python SDK reference](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py)
+
+---
+
+## Projects using Azure Machine Learning
+
+Visit the following repos to see projects contributed by Azure ML users:
+
+- [Fine tune natural language processing models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
+- [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)
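The reworked README routes every sample through the shared [Configuration](./configuration.ipynb) notebook. As a point of reference, here is a minimal sketch of that pattern using `azureml-core`; the workspace name, resource group, and region are placeholders mirroring the values used in the configuration notebook shown further down this compare, not real resources.

```python
# Minimal sketch, assuming azureml-core is installed (e.g. via `pip install azureml-sdk[notebooks]`).
from azureml.core import Workspace

# One-time setup (roughly what configuration.ipynb does): locate or create the
# workspace, then persist its coordinates to a local config file.
ws = Workspace.create(name="myws",                       # placeholder
                      subscription_id="<subscription_id>",
                      resource_group="myrg",              # placeholder
                      location="eastus2",
                      exist_ok=True)
ws.write_config()

# Every other notebook in the repo can then reconnect with one call.
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, sep="\t")
```

Once the config file has been written, `Workspace.from_config()` is all the remaining notebooks need to attach to the same workspace.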
@@ -35,7 +35,7 @@ Below are the three execution environments supported by AutoML.
 **NOTE**: You need at least contributor access to your Azure subscription to run the notebook.
 - Please remove any previous SDK version and install the latest SDK by installing **azureml-sdk[automl_databricks]** as a PyPI library in the Azure Databricks workspace.
 - You can find the detailed README instructions on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks).
-- Download the sample notebook AutoML_Databricks_local_06.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks) and import it into the Azure Databricks workspace.
+- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks) and import it into the Azure Databricks workspace.
 - Attach the notebook to the cluster.

 <a name="localconda"></a>
@@ -175,10 +175,6 @@ bash automl_setup_linux.sh
 - [auto-ml-dataprep-remote-execution.ipynb](dataprep-remote-execution/auto-ml-dataprep-remote-execution.ipynb)
   - Using DataPrep for reading data with remote execution

-- [auto-ml-classification-local-azuredatabricks.ipynb](classification-local-azuredatabricks/auto-ml-classification-local-azuredatabricks.ipynb)
-  - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
-  - Example of using AutoML for classification using Azure Databricks as the platform for training
-
 - [auto-ml-classification-with-whitelisting.ipynb](classification-with-whitelisting/auto-ml-classification-with-whitelisting.ipynb)
   - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
   - Simple example of using AutoML for classification with whitelisted TensorFlow models.
@@ -23,6 +23,10 @@ if errorlevel 1 goto ErrorExit

 call python -m ipykernel install --user --name %conda_env_name% --display-name "Python (%conda_env_name%)"

+REM azureml.widgets is now installed as part of the pip install under the conda env.
+REM Removing the old user install so that the notebooks will use the latest widget.
+call jupyter nbextension uninstall --user --py azureml.widgets
+
 echo.
 echo.
 echo ***************************************
@@ -22,11 +22,13 @@ fi
 if source activate $CONDA_ENV_NAME 2> /dev/null
 then
    echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
-   pip install --upgrade azureml-sdk[automl,notebooks,explain]
+   pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
+   jupyter nbextension uninstall --user --py azureml.widgets
 else
    conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
    source activate $CONDA_ENV_NAME &&
    python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
+   jupyter nbextension uninstall --user --py azureml.widgets &&
    echo "" &&
    echo "" &&
    echo "***************************************" &&
@@ -22,13 +22,15 @@ fi
 if source activate $CONDA_ENV_NAME 2> /dev/null
 then
    echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
-   pip install --upgrade azureml-sdk[automl,notebooks,explain]
+   pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
+   jupyter nbextension uninstall --user --py azureml.widgets
 else
    conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
    source activate $CONDA_ENV_NAME &&
    conda install lightgbm -c conda-forge -y &&
    python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
-   pip install numpy==1.15.3
+   jupyter nbextension uninstall --user --py azureml.widgets &&
+   pip install numpy==1.15.3 &&
    echo "" &&
    echo "" &&
    echo "***************************************" &&
@@ -62,11 +62,8 @@
 "source": [
 "import json\n",
 "import logging\n",
-"import os\n",
-"import random\n",
 "\n",
 "from matplotlib import pyplot as plt\n",
-"from matplotlib.pyplot import imshow\n",
 "import numpy as np\n",
 "import pandas as pd\n",
 "from sklearn import datasets\n",
@@ -102,7 +99,8 @@
 "output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
-"pd.DataFrame(data=output, index=['']).T"
+"outputDf = pd.DataFrame(data = output, index = [''])\n",
+"outputDf.T"
 ]
 },
 {
@@ -228,7 +226,8 @@
 "description = 'AutoML Model'\n",
 "tags = None\n",
 "model = local_run.register_model(description = description, tags = tags)\n",
-"local_run.model_id # This will be written to the script file later in the notebook."
+"\n",
+"print(local_run.model_id) # This will be written to the script file later in the notebook."
 ]
 },
 {
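Many of the notebook hunks in this compare repeat the same pair of edits: unused imports are removed, and the one-liner `pd.DataFrame(data=output, index=['']).T` is split into a named `outputDf` whose transpose is the cell's final expression. A small pandas-only sketch of the display behaviour the change relies on; the dictionary values are placeholders, not real run output:

```python
import pandas as pd

# Stand-in for the metadata the notebooks collect (SDK version, workspace, etc.).
output = {"SDK version": "1.0.x", "Workspace": "myws"}  # placeholder values

# In a Jupyter cell only the final expression is rendered, so naming the frame
# and ending the cell with its transpose shows a tidy one-column summary.
outputDf = pd.DataFrame(data=output, index=[""])
outputDf.T
```

Outside a notebook the same result needs an explicit `print(outputDf.T)`, which is also why the hunks wrap `model_id` in `print(...)` instead of leaving it as a bare expression mid-cell.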
|
|||||||
@@ -61,11 +61,8 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
|
||||||
"import random\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
"from sklearn import datasets\n",
|
||||||
@@ -73,8 +70,7 @@
|
|||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -100,7 +96,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data = output, index = ['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -135,8 +132,6 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"from sklearn import datasets\n",
|
|
||||||
"\n",
|
|
||||||
"digits = datasets.load_digits()\n",
|
"digits = datasets.load_digits()\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Exclude the first 100 rows from training so that they can be used for test.\n",
|
"# Exclude the first 100 rows from training so that they can be used for test.\n",
|
||||||
|
|||||||
@@ -60,11 +60,8 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
|
||||||
"import random\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
"from sklearn import datasets\n",
|
||||||
@@ -72,8 +69,7 @@
|
|||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -99,7 +95,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data = output, index = ['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -134,8 +131,6 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"from sklearn import datasets\n",
|
|
||||||
"\n",
|
|
||||||
"digits = datasets.load_digits()\n",
|
"digits = datasets.load_digits()\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Exclude the first 100 rows from training so that they can be used for test.\n",
|
"# Exclude the first 100 rows from training so that they can be used for test.\n",
|
||||||
|
|||||||
@@ -1,154 +0,0 @@
|
|||||||
{
|
|
||||||
"cells": [
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
|
||||||
"\n",
|
|
||||||
"Licensed under the MIT License."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"# Automated Machine Learning Configuration\n",
|
|
||||||
"\n",
|
|
||||||
"In this example you will create an Azure Machine Learning `Workspace` object and initialize your notebook directory to easily reload this object from a configuration file. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Check the Azure ML Core SDK Version to Validate Your Installation"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"import azureml.core\n",
|
|
||||||
"\n",
|
|
||||||
"print(\"SDK Version:\", azureml.core.VERSION)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"## Initialize an Azure ML Workspace\n",
|
|
||||||
"### What is an Azure ML Workspace and Why Do I Need One?\n",
|
|
||||||
"\n",
|
|
||||||
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
|
|
||||||
"\n",
|
|
||||||
"\n",
|
|
||||||
"### What do I Need?\n",
|
|
||||||
"\n",
|
|
||||||
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
|
|
||||||
"* A name for your workspace. You can choose one.\n",
|
|
||||||
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
|
|
||||||
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
|
|
||||||
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"subscription_id = \"<subscription_id>\"\n",
|
|
||||||
"resource_group = \"myrg\"\n",
|
|
||||||
"workspace_name = \"myws\"\n",
|
|
||||||
"workspace_region = \"eastus2\""
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"## Creating a Workspace\n",
|
|
||||||
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
|
|
||||||
"\n",
|
|
||||||
"This will fail when:\n",
|
|
||||||
"1. The workspace already exists.\n",
|
|
||||||
"2. You do not have permission to create a workspace in the resource group.\n",
|
|
||||||
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
|
|
||||||
"\n",
|
|
||||||
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
|
|
||||||
"\n",
|
|
||||||
"**Note:** Creation of a new workspace can take several minutes."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"# Import the Workspace class and check the Azure ML SDK version.\n",
|
|
||||||
"from azureml.core import Workspace\n",
|
|
||||||
"\n",
|
|
||||||
"ws = Workspace.create(name = workspace_name,\n",
|
|
||||||
" subscription_id = subscription_id,\n",
|
|
||||||
" resource_group = resource_group, \n",
|
|
||||||
" location = workspace_region)\n",
|
|
||||||
"ws.get_details()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"## Configuring Your Local Environment\n",
|
|
||||||
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core import Workspace\n",
|
|
||||||
"\n",
|
|
||||||
"ws = Workspace(workspace_name = workspace_name,\n",
|
|
||||||
" subscription_id = subscription_id,\n",
|
|
||||||
" resource_group = resource_group)\n",
|
|
||||||
"\n",
|
|
||||||
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
|
||||||
"ws.write_config()"
|
|
||||||
]
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"metadata": {
|
|
||||||
"authors": [
|
|
||||||
{
|
|
||||||
"name": "savitam"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"kernelspec": {
|
|
||||||
"display_name": "Python 3.6",
|
|
||||||
"language": "python",
|
|
||||||
"name": "python36"
|
|
||||||
},
|
|
||||||
"language_info": {
|
|
||||||
"codemirror_mode": {
|
|
||||||
"name": "ipython",
|
|
||||||
"version": 3
|
|
||||||
},
|
|
||||||
"file_extension": ".py",
|
|
||||||
"mimetype": "text/x-python",
|
|
||||||
"name": "python",
|
|
||||||
"nbconvert_exporter": "python",
|
|
||||||
"pygments_lexer": "ipython3",
|
|
||||||
"version": "3.6.6"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"nbformat": 4,
|
|
||||||
"nbformat_minor": 2
|
|
||||||
}
|
|
||||||
@@ -80,7 +80,6 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
|
||||||
"import time\n",
|
"import time\n",
|
||||||
"\n",
|
"\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
@@ -117,7 +116,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data = output, index = ['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -323,7 +323,6 @@
|
|||||||
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
|
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
|
||||||
" metricslist[int(properties['iteration'])] = metrics\n",
|
" metricslist[int(properties['iteration'])] = metrics\n",
|
||||||
" \n",
|
" \n",
|
||||||
"import pandas as pd\n",
|
|
||||||
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
|
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
|
||||||
"rundata"
|
"rundata"
|
||||||
]
|
]
|
||||||
@@ -427,8 +426,6 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"#Randomly select digits and test\n",
|
"#Randomly select digits and test\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import random\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"\n",
|
"\n",
|
||||||
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
|
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
|
||||||
@@ -482,7 +479,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"digits_complete.to_pandas_dataframe().shape\n",
|
"print(digits_complete.to_pandas_dataframe().shape)\n",
|
||||||
"labels_column = 'Column64'\n",
|
"labels_column = 'Column64'\n",
|
||||||
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
|
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
|
||||||
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"
|
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"
|
||||||
|
|||||||
@@ -80,7 +80,6 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -115,7 +114,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data = output, index = ['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -274,7 +274,6 @@
|
|||||||
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
|
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
|
||||||
" metricslist[int(properties['iteration'])] = metrics\n",
|
" metricslist[int(properties['iteration'])] = metrics\n",
|
||||||
" \n",
|
" \n",
|
||||||
"import pandas as pd\n",
|
|
||||||
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
|
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
|
||||||
"rundata"
|
"rundata"
|
||||||
]
|
]
|
||||||
@@ -378,8 +377,6 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"#Randomly select digits and test\n",
|
"#Randomly select digits and test\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import random\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"\n",
|
"\n",
|
||||||
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
|
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
|
||||||
@@ -433,7 +430,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"digits_complete.to_pandas_dataframe().shape\n",
|
"print(digits_complete.to_pandas_dataframe().shape)\n",
|
||||||
"labels_column = 'Column64'\n",
|
"labels_column = 'Column64'\n",
|
||||||
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
|
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
|
||||||
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"
|
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"
|
||||||
|
|||||||
@@ -53,22 +53,11 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import logging\n",
-"import os\n",
-"import random\n",
-"import re\n",
-"\n",
-"from matplotlib import pyplot as plt\n",
-"from matplotlib.pyplot import imshow\n",
-"import numpy as np\n",
 "import pandas as pd\n",
-"from sklearn import datasets\n",
+"import json\n",
 "\n",
-"import azureml.core\n",
 "from azureml.core.experiment import Experiment\n",
-"from azureml.core.run import Run\n",
 "from azureml.core.workspace import Workspace\n",
-"from azureml.train.automl import AutoMLConfig\n",
 "from azureml.train.automl.run import AutoMLRun"
 ]
 },
@@ -152,7 +141,7 @@
 "for run in automl_runs:\n",
 "    properties = run.get_properties()\n",
 "    tags = run.get_tags()\n",
-"    amlsettings = eval(properties['RawAMLSettingsString'])\n",
+"    amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
 "    if 'iterations' in tags:\n",
 "        iterations = tags['iterations']\n",
 "    else:\n",
@@ -196,7 +185,7 @@
 "properties = ml_run.get_properties()\n",
 "tags = ml_run.get_tags()\n",
 "status = ml_run.get_details()\n",
-"amlsettings = eval(properties['RawAMLSettingsString'])\n",
+"amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
 "if 'iterations' in tags:\n",
 "    iterations = tags['iterations']\n",
 "else:\n",
@@ -297,7 +286,7 @@
 "description = 'AutoML Model'\n",
 "tags = None\n",
 "ml_run.register_model(description = description, tags = tags)\n",
-"ml_run.model_id # Use this id to deploy the model as a web service in Azure."
+"print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
 ]
 },
 {
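The change above swaps `eval(properties['RawAMLSettingsString'])` for `json.loads(properties['AMLSettingsJsonString'])`: the run settings are parsed as data instead of being executed as Python text. A self-contained sketch of the safer pattern; the `properties` dictionary is a stand-in for `run.get_properties()`, and the settings keys shown are illustrative:

```python
import json

# Stand-in for run.get_properties(); the notebook reads the settings string
# from the AutoML run under the 'AMLSettingsJsonString' key.
properties = {"AMLSettingsJsonString": '{"iterations": 10, "primary_metric": "AUC_weighted"}'}

# json.loads only parses data; eval() would execute whatever the string contains.
amlsettings = json.loads(properties["AMLSettingsJsonString"])
print(amlsettings["iterations"], amlsettings["primary_metric"])
```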
|
|||||||
@@ -58,7 +58,6 @@
|
|||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import os\n",
|
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import warnings\n",
|
"import warnings\n",
|
||||||
"# Squash warning messages for cleaner output in the notebook\n",
|
"# Squash warning messages for cleaner output in the notebook\n",
|
||||||
@@ -68,9 +67,7 @@
|
|||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig\n",
|
||||||
"from azureml.train.automl.run import AutoMLRun\n",
|
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score"
|
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -98,7 +95,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Run History Name'] = experiment_name\n",
|
"output['Run History Name'] = experiment_name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data=output, index=['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -289,61 +287,6 @@
|
|||||||
"y_pred"
|
"y_pred"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Define a Check Data Function\n",
|
|
||||||
"\n",
|
|
||||||
"Remove the nan values from y_test to avoid error when calculate metrics "
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"def _check_calc_input(y_true, y_pred, rm_na=True):\n",
|
|
||||||
" \"\"\"\n",
|
|
||||||
" Check that 'y_true' and 'y_pred' are non-empty and\n",
|
|
||||||
" have equal length.\n",
|
|
||||||
"\n",
|
|
||||||
" :param y_true: Vector of actual values\n",
|
|
||||||
" :type y_true: array-like\n",
|
|
||||||
"\n",
|
|
||||||
" :param y_pred: Vector of predicted values\n",
|
|
||||||
" :type y_pred: array-like\n",
|
|
||||||
"\n",
|
|
||||||
" :param rm_na:\n",
|
|
||||||
" If rm_na=True, remove entries where y_true=NA and y_pred=NA.\n",
|
|
||||||
" :type rm_na: boolean\n",
|
|
||||||
"\n",
|
|
||||||
" :return:\n",
|
|
||||||
" Tuple (y_true, y_pred). if rm_na=True,\n",
|
|
||||||
" the returned vectors may differ from their input values.\n",
|
|
||||||
" :rtype: Tuple with 2 entries\n",
|
|
||||||
" \"\"\"\n",
|
|
||||||
" if len(y_true) != len(y_pred):\n",
|
|
||||||
" raise ValueError(\n",
|
|
||||||
" 'the true values and prediction values do not have equal length.')\n",
|
|
||||||
" elif len(y_true) == 0:\n",
|
|
||||||
" raise ValueError(\n",
|
|
||||||
" 'y_true and y_pred are empty.')\n",
|
|
||||||
" # if there is any non-numeric element in the y_true or y_pred,\n",
|
|
||||||
" # the ValueError exception will be thrown.\n",
|
|
||||||
" y_true = np.array(y_true).astype(float)\n",
|
|
||||||
" y_pred = np.array(y_pred).astype(float)\n",
|
|
||||||
" if rm_na:\n",
|
|
||||||
" # remove entries both in y_true and y_pred where at least\n",
|
|
||||||
" # one element in y_true or y_pred is missing\n",
|
|
||||||
" y_true_rm_na = y_true[~(np.isnan(y_true) | np.isnan(y_pred))]\n",
|
|
||||||
" y_pred_rm_na = y_pred[~(np.isnan(y_true) | np.isnan(y_pred))]\n",
|
|
||||||
" return (y_true_rm_na, y_pred_rm_na)\n",
|
|
||||||
" else:\n",
|
|
||||||
" return y_true, y_pred"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@@ -357,7 +300,22 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"y_test,y_pred = _check_calc_input(y_test,y_pred)"
+"if len(y_test) != len(y_pred):\n",
+"    raise ValueError(\n",
+"        'the true values and prediction values do not have equal length.')\n",
+"elif len(y_test) == 0:\n",
+"    raise ValueError(\n",
+"        'y_true and y_pred are empty.')\n",
+"\n",
+"# if there is any non-numeric element in the y_true or y_pred,\n",
+"# the ValueError exception will be thrown.\n",
+"y_test_f = np.array(y_test).astype(float)\n",
+"y_pred_f = np.array(y_pred).astype(float)\n",
+"\n",
+"# remove entries both in y_true and y_pred where at least\n",
+"# one element in y_true or y_pred is missing\n",
+"y_test = y_test_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))]\n",
+"y_pred = y_pred_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))]"
 ]
 },
 {
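The hunk above drops the notebook's `_check_calc_input` helper in favour of inline filtering: cast both vectors to float, then mask out every position where either side is NaN before computing metrics. A compact standalone sketch of the same masking logic; the sample arrays are made up:

```python
import numpy as np

y_test = np.array([3.0, np.nan, 5.0, 7.0])   # made-up sample values
y_pred = np.array([2.5, 4.0, np.nan, 6.5])

if len(y_test) != len(y_pred):
    raise ValueError("the true values and prediction values do not have equal length.")

# Keep only the positions where both vectors are present; metric functions such
# as mean_absolute_error would otherwise fail on the NaNs.
mask = ~(np.isnan(y_test) | np.isnan(y_pred))
y_test_clean, y_pred_clean = y_test[mask], y_pred[mask]
print(y_test_clean, y_pred_clean)
```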
|
|||||||
@@ -59,7 +59,6 @@
|
|||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import os\n",
|
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import warnings\n",
|
"import warnings\n",
|
||||||
"# Squash warning messages for cleaner output in the notebook\n",
|
"# Squash warning messages for cleaner output in the notebook\n",
|
||||||
@@ -69,7 +68,6 @@
|
|||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig\n",
|
||||||
"from azureml.train.automl.run import AutoMLRun\n",
|
|
||||||
"from sklearn.metrics import mean_absolute_error, mean_squared_error"
|
"from sklearn.metrics import mean_absolute_error, mean_squared_error"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -97,7 +95,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Run History Name'] = experiment_name\n",
|
"output['Run History Name'] = experiment_name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data=output, index=['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -63,11 +63,8 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
|
||||||
"import random\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
"from sklearn import datasets\n",
|
||||||
@@ -75,8 +72,7 @@
|
|||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -102,7 +98,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data=output, index=['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -135,8 +132,6 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"from scipy import sparse\n",
|
|
||||||
"\n",
|
|
||||||
"digits = datasets.load_digits()\n",
|
"digits = datasets.load_digits()\n",
|
||||||
"X_train = digits.data[10:,:]\n",
|
"X_train = digits.data[10:,:]\n",
|
||||||
"y_train = digits.target[10:]\n",
|
"y_train = digits.target[10:]\n",
|
||||||
|
|||||||
@@ -57,15 +57,12 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
|
||||||
"import random\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -92,7 +89,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data = output, index = ['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -58,20 +58,15 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
|
||||||
"import random\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -97,7 +92,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data = output, index = ['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -354,9 +350,6 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"%matplotlib inline\n",
|
"%matplotlib inline\n",
|
||||||
"import matplotlib.pyplot as plt\n",
|
|
||||||
"import numpy as np\n",
|
|
||||||
"from sklearn import datasets\n",
|
|
||||||
"from sklearn.metrics import mean_squared_error, r2_score\n",
|
"from sklearn.metrics import mean_squared_error, r2_score\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Set up a multi-plot chart.\n",
|
"# Set up a multi-plot chart.\n",
|
||||||
@@ -375,8 +368,8 @@
|
|||||||
"a0.set_ylabel('Residual Values', fontsize = 12)\n",
|
"a0.set_ylabel('Residual Values', fontsize = 12)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Plot a histogram.\n",
|
"# Plot a histogram.\n",
|
||||||
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step');\n",
|
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step')\n",
|
||||||
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10);\n",
|
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Plot residual values of test set.\n",
|
"# Plot residual values of test set.\n",
|
||||||
"a1.axis([0, 90, -200, 200])\n",
|
"a1.axis([0, 90, -200, 200])\n",
|
||||||
|
|||||||
@@ -66,21 +66,15 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
|
||||||
"import os\n",
|
"import os\n",
|
||||||
"import random\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -106,7 +100,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data=output, index=['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -67,10 +67,8 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
"import os\n",
|
||||||
"import random\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
"from sklearn import datasets\n",
|
||||||
@@ -78,8 +76,7 @@
|
|||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -105,7 +102,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data = output, index = ['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -170,7 +168,7 @@
|
|||||||
" # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
|
" # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
|
||||||
" compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
|
" compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
|
||||||
" \n",
|
" \n",
|
||||||
" # For a more detailed view of current AmlCompute status, use the 'status' property."
|
" # For a more detailed view of current AmlCompute status, use get_status()."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -59,21 +59,16 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
"import os\n",
|
||||||
"import random\n",
|
|
||||||
"import time\n",
|
"import time\n",
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.compute import DsvmCompute\n",
|
"from azureml.core.compute import DsvmCompute\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -100,7 +95,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data=output, index=['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -169,7 +165,8 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"mkdir data"
|
"if not os.path.isdir('data'):\n",
|
||||||
|
" os.mkdir('data') "
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
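The hunk above replaces the bare `mkdir data` cell with a Python existence check, so re-running the cell no longer fails when the folder is already there. An equivalent minimal sketch:

```python
import os

# Create the folder only if it is missing; on Python 3,
# os.makedirs("data", exist_ok=True) is an equivalent one-liner.
if not os.path.isdir("data"):
    os.mkdir("data")
```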
@@ -218,7 +215,6 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"from azureml.core import Workspace, Datastore\n",
|
|
||||||
"#blob_datastore = Datastore(ws, blob_datastore_name)\n",
|
"#blob_datastore = Datastore(ws, blob_datastore_name)\n",
|
||||||
"ds = ws.get_default_datastore()\n",
|
"ds = ws.get_default_datastore()\n",
|
||||||
"print(ds.datastore_type, ds.account_name, ds.container_name)"
|
"print(ds.datastore_type, ds.account_name, ds.container_name)"
|
||||||
|
|||||||
@@ -67,11 +67,9 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
"import os\n",
|
||||||
"import random\n",
|
|
||||||
"import time\n",
|
"import time\n",
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
"from sklearn import datasets\n",
|
||||||
@@ -79,8 +77,7 @@
|
|||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -106,7 +103,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data = output, index = ['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -51,11 +51,8 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
|
||||||
"import random\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
"from matplotlib import pyplot as plt\n",
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
"import numpy as np\n",
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
"from sklearn import datasets\n",
|
||||||
@@ -63,8 +60,7 @@
|
|||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -93,7 +89,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data = output, index = ['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -61,20 +61,13 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import logging\n",
|
"import logging\n",
|
||||||
"import os\n",
|
|
||||||
"import random\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"from matplotlib import pyplot as plt\n",
|
|
||||||
"from matplotlib.pyplot import imshow\n",
|
|
||||||
"import numpy as np\n",
|
|
||||||
"import pandas as pd\n",
|
"import pandas as pd\n",
|
||||||
"from sklearn import datasets\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig\n",
|
"from azureml.train.automl import AutoMLConfig"
|
||||||
"from azureml.train.automl.run import AutoMLRun"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -101,7 +94,8 @@
|
|||||||
"output['Project Directory'] = project_folder\n",
|
"output['Project Directory'] = project_folder\n",
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"pd.DataFrame(data=output, index=['']).T"
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
BIN
how-to-use-azureml/azure-databricks/Databricks_AMLSDK_1-4_6.dbc
Normal file
BIN
how-to-use-azureml/azure-databricks/Databricks_AMLSDK_1-4_6.dbc
Normal file
Binary file not shown.
@@ -1,20 +1,21 @@
|
|||||||
Azure Databricks is a managed Spark offering on Azure and customers already use it for advanced analytics. It provides a collaborative notebook-based environment with CPU- or GPU-based compute clusters.
|
Azure Databricks is a managed Spark offering on Azure and customers already use it for advanced analytics. It provides a collaborative notebook-based environment with CPU- or GPU-based compute clusters.
|
||||||
|
|
||||||
In this section, you will see sample notebooks on how to use Azure Machine Learning SDK with Azure Databricks. You can train a model using Spark MLlib and then deploy the model to ACI/AKS from within Azure Databricks. You can also use Automated ML capability (**public preview**) of Azure ML SDK with Azure Databricks.
|
In this section, you will find sample notebooks on how to use Azure Machine Learning SDK with Azure Databricks. You can train a model using Spark MLlib and then deploy the model to ACI/AKS from within Azure Databricks. You can also use Automated ML capability (**public preview**) of Azure ML SDK with Azure Databricks.
|
||||||
|
|
||||||
- Customers who use Azure Databricks for advanced analytics can now use the same cluster to run experiments with or without automated machine learning.
|
- Customers who use Azure Databricks for advanced analytics can now use the same cluster to run experiments with or without automated machine learning.
|
||||||
- You can keep the data within the same cluster.
|
- You can keep the data within the same cluster.
|
||||||
- You can leverage the local worker nodes with autoscale and auto termination capabilities.
|
- You can leverage the local worker nodes with autoscale and auto termination capabilities.
|
||||||
- You can use multiple cores of your Azure Databricks cluster to perform simultaneous training.
|
- You can use multiple cores of your Azure Databricks cluster to perform simultaneous training.
|
||||||
- You can further tune the model generated by automated machine learning if you choose to.
|
- You can further tune the model generated by automated machine learning if you choose to.
|
||||||
- Every run (including the best run) is available as a pipeline.
|
- Every run (including the best run) is available as a pipeline, which you can tune further if needed.
|
||||||
- The model trained using Azure Databricks can be registered in Azure ML SDK workspace and then deployed to Azure managed compute (ACI or AKS) using the Azure Machine learning SDK.
|
- The model trained using Azure Databricks can be registered in Azure ML SDK workspace and then deployed to Azure managed compute (ACI or AKS) using the Azure Machine learning SDK.
|
||||||
|
|
||||||
|
|
||||||
**Create Azure Databricks Cluster:**
|
**Create Azure Databricks Cluster:**
|
||||||
|
|
||||||
Select New Cluster and fill in the following details:
|
Select New Cluster and fill in the following details:
|
||||||
- Cluster name: _yourclustername_
|
- Cluster name: _yourclustername_
|
||||||
- Databricks Runtime: Any 4.x runtime.
|
- Databricks Runtime: Any **non ML** runtime (non ML 4.x, 5.x)
|
||||||
- Python version: **3**
|
- Python version: **3**
|
||||||
- Workers: 2 or higher.
|
- Workers: 2 or higher.
|
||||||
|
|
||||||
@@ -46,7 +47,7 @@ It will take few minutes to create the cluster. Please ensure that the cluster s
|
|||||||
|
|
||||||
- Click Install Library
|
- Click Install Library
|
||||||
|
|
||||||
- Do not select _Attach automatically to all clusters_. In case you have selected earlier then you can go to your Home folder and deselect it.
|
- Do not select _Attach automatically to all clusters_. In case you selected this earlier, please go to your Home folder and deselect it.
|
||||||
|
|
||||||
- Select the check box _Attach_ next to your cluster name
|
- Select the check box _Attach_ next to your cluster name
|
||||||
|
|
||||||
@@ -54,17 +55,17 @@ It will take few minutes to create the cluster. Please ensure that the cluster s
|
|||||||
|
|
||||||
- Ensure that there are no errors until Status changes to _Attached_. It may take a couple of minutes.
|
- Ensure that there are no errors until Status changes to _Attached_. It may take a couple of minutes.
|
||||||
|
|
||||||
**Note** - If you have the old build the please deselect it from cluster’s installed libs > move to trash. Install the new build and restart the cluster. And if still there is an issue then detach and reattach your cluster.
|
**Note** - If you have an old SDK version, please deselect it from the cluster’s installed libs > move to trash. Install the new SDK version and restart the cluster. If there is an issue after this, please detach and reattach your cluster.
|
||||||
|
|
||||||
iPython Notebooks 1-4 have to be run sequentially after making changes based on your subscription. The corresponding DBC archive contains all the notebooks and can be imported into your Databricks workspace. You can the run notebooks after importing [databricks_amlsdk](Databricks_AMLSDK_1-4_6.dbc) instead of downloading individually.
|
**Single file** -
|
||||||
|
The following archive contains all the sample notebooks. You can then run the notebooks after importing the [DBC](Databricks_AMLSDK_1-4_6.dbc) into your Databricks workspace instead of downloading them individually.
|
||||||
|
|
||||||
Notebooks 1-4 are related to Income prediction experiment based on this [dataset](https://archive.ics.uci.edu/ml/datasets/adult) and demonstrate how to data prep, train and operationalize a Spark ML model with Azure ML Python SDK from within Azure Databricks. Notebook 6 is an Automated ML sample notebook.
|
Notebooks 1-4 have to be run sequentially. They are related to an income prediction experiment based on this [dataset](https://archive.ics.uci.edu/ml/datasets/adult) and demonstrate how to prepare data, train, and operationalize a Spark ML model with the Azure ML Python SDK from within Azure Databricks.
|
||||||
|
|
||||||
For details on SDK concepts, please refer to [notebooks](https://github.com/Azure/MachineLearningNotebooks).
|
Notebook 6 is an Automated ML sample notebook for Classification.
|
||||||
|
|
||||||
Learn more about [how to use Azure Databricks as a development environment](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment#azure-databricks) for Azure Machine Learning service.
|
Learn more about [how to use Azure Databricks as a development environment](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment#azure-databricks) for Azure Machine Learning service.
|
||||||
|
|
||||||
You can also use Azure Databricks as a compute target for [training models with an Azure Machine Learning pipeline](https://docs.microsoft.com/machine-learning/service/how-to-set-up-training-targets#databricks).
|
For more on SDK concepts, please refer to [notebooks](https://github.com/Azure/MachineLearningNotebooks).
|
||||||
|
|
||||||
|
|
||||||
**Please let us know your feedback.**
|
**Please let us know your feedback.**
|
||||||
@@ -54,21 +54,6 @@
|
|||||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##TESTONLY\n",
|
|
||||||
"# import auth creds from notebook parameters\n",
|
|
||||||
"tenant = dbutils.widgets.get('tenant_id')\n",
|
|
||||||
"username = dbutils.widgets.get('service_principal_id')\n",
|
|
||||||
"password = dbutils.widgets.get('service_principal_password')\n",
|
|
||||||
"\n",
|
|
||||||
"auth = azureml.core.authentication.ServicePrincipalAuthentication(tenant, username, password)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
@@ -91,15 +76,14 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"##PUBLISHONLY\n",
|
"# import the Workspace class and check the azureml SDK version\n",
|
||||||
"## import the Workspace class and check the azureml SDK version\n",
|
"from azureml.core import Workspace\n",
|
||||||
"#from azureml.core import Workspace\n",
|
"\n",
|
||||||
"#\n",
|
"ws = Workspace.from_config()\n",
|
||||||
"#ws = Workspace.from_config()\n",
|
"print('Workspace name: ' + ws.name, \n",
|
||||||
"#print('Workspace name: ' + ws.name, \n",
|
" 'Azure region: ' + ws.location, \n",
|
||||||
"# 'Azure region: ' + ws.location, \n",
|
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||||
"# 'Subscription id: ' + ws.subscription_id, \n",
|
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||||
"# 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -372,9 +356,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -39,21 +39,6 @@
|
|||||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##TESTONLY\n",
|
|
||||||
"# import auth creds from notebook parameters\n",
|
|
||||||
"tenant = dbutils.widgets.get('tenant_id')\n",
|
|
||||||
"username = dbutils.widgets.get('service_principal_id')\n",
|
|
||||||
"password = dbutils.widgets.get('service_principal_password')\n",
|
|
||||||
"\n",
|
|
||||||
"auth = azureml.core.authentication.ServicePrincipalAuthentication(tenant, username, password)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
@@ -77,20 +62,19 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"##PUBLISHONLY\n",
|
"from azureml.core import Workspace\n",
|
||||||
"#from azureml.core import Workspace\n",
|
"import azureml.core\n",
|
||||||
"#import azureml.core\n",
|
"\n",
|
||||||
"#\n",
|
"# Check core SDK version number\n",
|
||||||
"## Check core SDK version number\n",
|
"print(\"SDK version:\", azureml.core.VERSION)\n",
|
||||||
"#print(\"SDK version:\", azureml.core.VERSION)\n",
|
"\n",
|
||||||
"#\n",
|
"#'''\n",
|
||||||
"##'''\n",
|
"ws = Workspace.from_config()\n",
|
||||||
"#ws = Workspace.from_config()\n",
|
"print('Workspace name: ' + ws.name, \n",
|
||||||
"#print('Workspace name: ' + ws.name, \n",
|
" 'Azure region: ' + ws.location, \n",
|
||||||
"# 'Azure region: ' + ws.location, \n",
|
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||||
"# 'Subscription id: ' + ws.subscription_id, \n",
|
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
|
||||||
"# 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
|
"#'''"
|
||||||
"##'''"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -330,9 +314,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -158,9 +158,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -73,35 +73,6 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"##TESTONLY\n",
|
|
||||||
"# import auth creds from notebook parameters\n",
|
|
||||||
"tenant = dbutils.widgets.get('tenant_id')\n",
|
|
||||||
"username = dbutils.widgets.get('service_principal_id')\n",
|
|
||||||
"password = dbutils.widgets.get('service_principal_password')\n",
|
|
||||||
"\n",
|
|
||||||
"auth = azureml.core.authentication.ServicePrincipalAuthentication(tenant, username, password)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##TESTONLY\n",
|
|
||||||
"subscription_id = dbutils.widgets.get('subscription_id')\n",
|
|
||||||
"resource_group = dbutils.widgets.get('resource_group')\n",
|
|
||||||
"workspace_name = dbutils.widgets.get('workspace_name')\n",
|
|
||||||
"workspace_region = dbutils.widgets.get('workspace_region')"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##TESTONLY\n",
|
|
||||||
"# import the Workspace class and check the azureml SDK version\n",
|
"# import the Workspace class and check the azureml SDK version\n",
|
||||||
"# exist_ok checks if workspace exists or not.\n",
|
"# exist_ok checks if workspace exists or not.\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -111,29 +82,9 @@
|
|||||||
" subscription_id = subscription_id,\n",
|
" subscription_id = subscription_id,\n",
|
||||||
" resource_group = resource_group, \n",
|
" resource_group = resource_group, \n",
|
||||||
" location = workspace_region,\n",
|
" location = workspace_region,\n",
|
||||||
" auth = auth,\n",
|
|
||||||
" exist_ok=True)"
|
" exist_ok=True)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##PUBLISHONLY\n",
|
|
||||||
"## import the Workspace class and check the azureml SDK version\n",
|
|
||||||
"## exist_ok checks if workspace exists or not.\n",
|
|
||||||
"#\n",
|
|
||||||
"#from azureml.core import Workspace\n",
|
|
||||||
"#\n",
|
|
||||||
"#ws = Workspace.create(name = workspace_name,\n",
|
|
||||||
"# subscription_id = subscription_id,\n",
|
|
||||||
"# resource_group = resource_group, \n",
|
|
||||||
"# location = workspace_region,\n",
|
|
||||||
"# exist_ok=True)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
@@ -150,31 +101,14 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"##TESTONLY\n",
|
|
||||||
"ws = Workspace(workspace_name = workspace_name,\n",
|
"ws = Workspace(workspace_name = workspace_name,\n",
|
||||||
" subscription_id = subscription_id,\n",
|
" subscription_id = subscription_id,\n",
|
||||||
" resource_group = resource_group,\n",
|
" resource_group = resource_group)\n",
|
||||||
" auth = auth)\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
"# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
||||||
"ws.write_config()"
|
"ws.write_config()\n",
|
||||||
]
|
"##if you need to give a different path/filename please use this\n",
|
||||||
},
|
"##write_config(path=\"/databricks/driver/aml_config/\",file_name=<alias_conf.cfg>)"
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##PUBLISHONLY\n",
|
|
||||||
"#ws = Workspace(workspace_name = workspace_name,\n",
|
|
||||||
"# subscription_id = subscription_id,\n",
|
|
||||||
"# resource_group = resource_group)\n",
|
|
||||||
"#\n",
|
|
||||||
"## persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
|
||||||
"#ws.write_config()\n",
|
|
||||||
"###if you need to give a different path/filename please use this\n",
|
|
||||||
"###write_config(path=\"/databricks/driver/aml_config/\",file_name=<alias_conf.cfg>)"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -192,11 +126,10 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"##TESTONLY\n",
|
|
||||||
"# import the Workspace class and check the azureml SDK version\n",
|
"# import the Workspace class and check the azureml SDK version\n",
|
||||||
"from azureml.core import Workspace\n",
|
"from azureml.core import Workspace\n",
|
||||||
"\n",
|
"\n",
|
||||||
"ws = Workspace.from_config(auth = auth)\n",
|
"ws = Workspace.from_config()\n",
|
||||||
"#ws = Workspace.from_config(<full path>)\n",
|
"#ws = Workspace.from_config(<full path>)\n",
|
||||||
"print('Workspace name: ' + ws.name, \n",
|
"print('Workspace name: ' + ws.name, \n",
|
||||||
" 'Azure region: ' + ws.location, \n",
|
" 'Azure region: ' + ws.location, \n",
|
||||||
@@ -204,24 +137,6 @@
|
|||||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##PUBLISHONLY\n",
|
|
||||||
"## import the Workspace class and check the azureml SDK version\n",
|
|
||||||
"#from azureml.core import Workspace\n",
|
|
||||||
"#\n",
|
|
||||||
"#ws = Workspace.from_config()\n",
|
|
||||||
"##ws = Workspace.from_config(<full path>)\n",
|
|
||||||
"#print('Workspace name: ' + ws.name, \n",
|
|
||||||
"# 'Azure region: ' + ws.location, \n",
|
|
||||||
"# 'Subscription id: ' + ws.subscription_id, \n",
|
|
||||||
"# 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
@@ -240,9 +155,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -13,45 +13,31 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"We support installing AML SDK as library from GUI. When attaching a library follow this https://docs.databricks.com/user-guide/libraries.html and add the below string as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.\n",
|
"# Automated ML on Azure Databricks\n",
|
||||||
"\n",
|
"\n",
|
||||||
"**install azureml-sdk with Automated ML**\n",
|
"In this example we use the scikit-learn's <a href=\"http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset\" target=\"_blank\">digit dataset</a> to showcase how you can use AutoML for a simple classification problem.\n",
|
||||||
"* Source: Upload Python Egg or PyPi\n",
|
|
||||||
"* PyPi Name: `azureml-sdk[automl_databricks]`\n",
|
|
||||||
"* Select Install Library"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"# AutoML : Classification with Local Compute on Azure DataBricks\n",
|
|
||||||
"\n",
|
|
||||||
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"In this notebook you will learn how to:\n",
|
"In this notebook you will learn how to:\n",
|
||||||
"1. Create Azure Machine Learning Workspace object and initialize your notebook directory to easily reload this object from a configuration file.\n",
|
"1. Create Azure Machine Learning Workspace object and initialize your notebook directory to easily reload this object from a configuration file.\n",
|
||||||
"2. Create an `Experiment` in an existing `Workspace`.\n",
|
"2. Create an `Experiment` in an existing `Workspace`.\n",
|
||||||
"3. Configure AutoML using `AutoMLConfig`.\n",
|
"3. Configure Automated ML using `AutoMLConfig`.\n",
|
||||||
"4. Train the model using AzureDataBricks.\n",
|
"4. Train the model using Azure Databricks.\n",
|
||||||
"5. Explore the results.\n",
|
"5. Explore the results.\n",
|
||||||
"6. Test the best fitted model.\n",
|
"6. Test the best fitted model.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Prerequisites:\n",
|
"Before running this notebook, please follow the <a href=\"https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks\" target=\"_blank\">readme for using Automated ML on Azure Databricks</a> for installing necessary libraries to your cluster."
|
||||||
"Before running this notebook, please follow the readme for installing necessary libraries to your cluster."
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Register Machine Learning Services Resource Provider\n",
|
"We support installing AML SDK with Automated ML as library from GUI. When attaching a library follow <a href=\"https://docs.databricks.com/user-guide/libraries.html\" target=\"_blank\">this link</a> and add the below string as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.\n",
|
||||||
"Microsoft.MachineLearningServices only needs to be registed once in the subscription. To register it:\n",
|
"\n",
|
||||||
"Start the Azure portal.\n",
|
"**azureml-sdk with automated ml**\n",
|
||||||
"Select your All services and then Subscription.\n",
|
"* Source: Upload Python Egg or PyPi\n",
|
||||||
"Select the subscription that you want to use.\n",
|
"* PyPi Name: `azureml-sdk[automl_databricks]`\n",
|
||||||
"Click on Resource providers\n",
|
"* Select Install Library"
|
||||||
"Click the Register link next to Microsoft.MachineLearningServices"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -97,11 +83,10 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"##PUBLISHONLY\n",
|
"subscription_id = \"<Your SubscriptionId>\" #you should be owner or contributor\n",
|
||||||
"#subscription_id = \"<Your SubscriptionId>\"\n",
|
"resource_group = \"<Resource group - new or existing>\" #you should be owner or contributor\n",
|
||||||
"#resource_group = \"<Resource group - new or existing>\"\n",
|
"workspace_name = \"<workspace to be created>\" #your workspace name\n",
|
||||||
"#workspace_name = \"<workspace to be created>\"\n",
|
"workspace_region = \"<azureregion>\" #your region"
|
||||||
"#workspace_region = \"<azureregion>\" #eg. eastus2, westcentralus, westeurope"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -121,34 +106,6 @@
|
|||||||
"**Note:** Creation of a new workspace can take several minutes."
|
"**Note:** Creation of a new workspace can take several minutes."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##TESTONLY\n",
|
|
||||||
"# import auth creds from notebook parameters\n",
|
|
||||||
"tenant = dbutils.widgets.get('tenant_id')\n",
|
|
||||||
"username = dbutils.widgets.get('service_principal_id')\n",
|
|
||||||
"password = dbutils.widgets.get('service_principal_password')\n",
|
|
||||||
"\n",
|
|
||||||
"auth = azureml.core.authentication.ServicePrincipalAuthentication(tenant, username, password)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##TESTONLY\n",
|
|
||||||
"subscription_id = dbutils.widgets.get('subscription_id')\n",
|
|
||||||
"resource_group = dbutils.widgets.get('resource_group')\n",
|
|
||||||
"workspace_name = dbutils.widgets.get('workspace_name')\n",
|
|
||||||
"workspace_region = dbutils.widgets.get('workspace_region')"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
@@ -162,7 +119,6 @@
|
|||||||
" subscription_id = subscription_id,\n",
|
" subscription_id = subscription_id,\n",
|
||||||
" resource_group = resource_group, \n",
|
" resource_group = resource_group, \n",
|
||||||
" location = workspace_region, \n",
|
" location = workspace_region, \n",
|
||||||
" auth = auth,\n",
|
|
||||||
" exist_ok=True)\n",
|
" exist_ok=True)\n",
|
||||||
"ws.get_details()"
|
"ws.get_details()"
|
||||||
]
|
]
|
||||||
@@ -172,22 +128,7 @@
|
|||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": []
|
||||||
"##PUBLISHONLY\n",
|
|
||||||
"#from azureml.core import Workspace\n",
|
|
||||||
"#import azureml.core\n",
|
|
||||||
"#\n",
|
|
||||||
"## Check core SDK version number\n",
|
|
||||||
"#print(\"SDK version:\", azureml.core.VERSION)\n",
|
|
||||||
"#\n",
|
|
||||||
"##'''\n",
|
|
||||||
"#ws = Workspace.from_config()\n",
|
|
||||||
"#print('Workspace name: ' + ws.name, \n",
|
|
||||||
"# 'Azure region: ' + ws.location, \n",
|
|
||||||
"# 'Subscription id: ' + ws.subscription_id, \n",
|
|
||||||
"# 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
|
|
||||||
"##'''"
|
|
||||||
]
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
@@ -203,35 +144,16 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"##TESTONLY\n",
|
|
||||||
"from azureml.core import Workspace\n",
|
"from azureml.core import Workspace\n",
|
||||||
"\n",
|
"\n",
|
||||||
"ws = Workspace(workspace_name = workspace_name,\n",
|
"ws = Workspace(workspace_name = workspace_name,\n",
|
||||||
" subscription_id = subscription_id,\n",
|
" subscription_id = subscription_id,\n",
|
||||||
" resource_group = resource_group,\n",
|
" resource_group = resource_group)\n",
|
||||||
" auth = auth)\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
||||||
"ws.write_config()"
|
"ws.write_config()"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##PUBLISHONLY\n",
|
|
||||||
"#from azureml.core import Workspace\n",
|
|
||||||
"#\n",
|
|
||||||
"#ws = Workspace(workspace_name = workspace_name,\n",
|
|
||||||
"# subscription_id = subscription_id,\n",
|
|
||||||
"# resource_group = resource_group)\n",
|
|
||||||
"#\n",
|
|
||||||
"## Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
|
||||||
"#ws.write_config()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@@ -262,7 +184,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Create an Experiment\n",
|
"## Create an Experiment\n",
|
||||||
"\n",
|
"\n",
|
||||||
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
|
"As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -288,26 +210,6 @@
|
|||||||
"from azureml.train.automl.run import AutoMLRun"
|
"from azureml.train.automl.run import AutoMLRun"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##TESTONLY\n",
|
|
||||||
"ws = Workspace.from_config(auth = auth)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"##PUBLISHONLY\n",
|
|
||||||
"#ws = Workspace.from_config(auth = auth)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
@@ -364,6 +266,9 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
|
"#Automated ML requires a dataflow, which is different from dataframe.\n",
|
||||||
|
"#If your data is in a dataframe, please use read_pandas_dataframe to convert a dataframe to dataflow before usind dprep.\n",
|
||||||
|
"\n",
|
||||||
"import azureml.dataprep as dprep\n",
|
"import azureml.dataprep as dprep\n",
|
||||||
"# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
|
"# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
|
||||||
"# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
|
"# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
|
||||||
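The comment added above mentions `read_pandas_dataframe` for converting an in-memory DataFrame to a dataflow. A rough sketch of that conversion follows; the toy data is purely illustrative and the exact signature may differ between azureml-dataprep versions, so treat this as an assumption rather than the notebook's code:

```python
# Hedged sketch: wrap a pandas DataFrame as a dataflow before handing it to automated ML.
import pandas as pd
import azureml.dataprep as dprep

df = pd.DataFrame({'feature': [1, 2, 3], 'label': [0, 1, 0]})  # toy data for illustration only
dflow = dprep.read_pandas_dataframe(df)  # some SDK versions also require a temp_folder argument
```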
@@ -435,7 +340,6 @@
|
|||||||
" spark_context=sc, #databricks/spark related\n",
|
" spark_context=sc, #databricks/spark related\n",
|
||||||
" X = X_train, \n",
|
" X = X_train, \n",
|
||||||
" y = y_train,\n",
|
" y = y_train,\n",
|
||||||
" enable_cache=False,\n",
|
|
||||||
" path = project_folder)"
|
" path = project_folder)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -480,7 +384,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"print(local_run.get_portal_url())"
|
"displayHTML(\"<a href={} target='_blank'>Your experiment in Azure Portal: {}</a>\".format(local_run.get_portal_url(), local_run.id))"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -608,7 +512,9 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"When deploying an automated ML trained model, please specify _pip_packages=['azureml-sdk[automl]']_ in your CondaDependencies."
|
"When deploying an automated ML trained model, please specify _pippackages=['azureml-sdk[automl]']_ in your CondaDependencies.\n",
|
||||||
|
"\n",
|
||||||
|
"Please refer to only the **Deploy** section in this notebook - <a href=\"https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-with-deployment\" target=\"_blank\">Deployment of Automated ML trained model</a>"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
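A minimal sketch of the CondaDependencies note above; the file name and the surrounding deployment setup are assumptions, not part of the notebook:

```python
# Hedged sketch: include the AutoML extra when declaring dependencies for the deployment image.
from azureml.core.conda_dependencies import CondaDependencies

myenv = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'])
with open('myenv.yml', 'w') as f:
    f.write(myenv.serialize_to_string())
```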
{
|
{
|
||||||
@@ -629,9 +535,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
@@ -646,8 +552,8 @@
|
|||||||
"version": "3.7.0"
|
"version": "3.7.0"
|
||||||
},
|
},
|
||||||
"name": "auto-ml-classification-local-adb",
|
"name": "auto-ml-classification-local-adb",
|
||||||
"notebookId": 3836944406456411
|
"notebookId": 817220787969977
|
||||||
},
|
},
|
||||||
"nbformat": 4,
|
"nbformat": 4,
|
||||||
"nbformat_minor": 1
|
"nbformat_minor": 0
|
||||||
}
|
}
|
||||||
@@ -0,0 +1,704 @@
|
|||||||
|
{
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||||
|
"\n",
|
||||||
|
"Licensed under the MIT License."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"We support installing AML SDK as library from GUI. When attaching a library follow this https://docs.databricks.com/user-guide/libraries.html and add the below string as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.\n",
|
||||||
|
"\n",
|
||||||
|
"**install azureml-sdk with Automated ML**\n",
|
||||||
|
"* Source: Upload Python Egg or PyPi\n",
|
||||||
|
"* PyPi Name: `azureml-sdk[automl_databricks]`\n",
|
||||||
|
"* Select Install Library"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"# AutoML : Classification with Local Compute on Azure DataBricks with deployment to ACI\n",
|
||||||
|
"\n",
|
||||||
|
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
|
||||||
|
"\n",
|
||||||
|
"In this notebook you will learn how to:\n",
|
||||||
|
"1. Create Azure Machine Learning Workspace object and initialize your notebook directory to easily reload this object from a configuration file.\n",
|
||||||
|
"2. Create an `Experiment` in an existing `Workspace`.\n",
|
||||||
|
"3. Configure AutoML using `AutoMLConfig`.\n",
|
||||||
|
"4. Train the model using AzureDataBricks.\n",
|
||||||
|
"5. Explore the results.\n",
|
||||||
|
"6. Register the model.\n",
|
||||||
|
"7. Deploy the model.\n",
|
||||||
|
"8. Test the best fitted model.\n",
|
||||||
|
"\n",
|
||||||
|
"Prerequisites:\n",
|
||||||
|
"Before running this notebook, please follow the readme for installing necessary libraries to your cluster."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Register Machine Learning Services Resource Provider\n",
|
||||||
|
"Microsoft.MachineLearningServices only needs to be registed once in the subscription. To register it:\n",
|
||||||
|
"Start the Azure portal.\n",
|
||||||
|
"Select your All services and then Subscription.\n",
|
||||||
|
"Select the subscription that you want to use.\n",
|
||||||
|
"Click on Resource providers\n",
|
||||||
|
"Click the Register link next to Microsoft.MachineLearningServices"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Check the Azure ML Core SDK Version to Validate Your Installation"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"import azureml.core\n",
|
||||||
|
"\n",
|
||||||
|
"print(\"SDK Version:\", azureml.core.VERSION)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Initialize an Azure ML Workspace\n",
|
||||||
|
"### What is an Azure ML Workspace and Why Do I Need One?\n",
|
||||||
|
"\n",
|
||||||
|
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"### What do I Need?\n",
|
||||||
|
"\n",
|
||||||
|
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
|
||||||
|
"* A name for your workspace. You can choose one.\n",
|
||||||
|
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
|
||||||
|
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
|
||||||
|
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"subscription_id = \"<Your SubscriptionId>\"\n",
|
||||||
|
"resource_group = \"<Resource group - new or existing>\"\n",
|
||||||
|
"workspace_name = \"<workspace to be created>\"\n",
|
||||||
|
"workspace_region = \"<azureregion>\""
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Creating a Workspace\n",
|
||||||
|
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
|
||||||
|
"\n",
|
||||||
|
"This will fail when:\n",
|
||||||
|
"1. The workspace already exists.\n",
|
||||||
|
"2. You do not have permission to create a workspace in the resource group.\n",
|
||||||
|
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
|
||||||
|
"\n",
|
||||||
|
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
|
||||||
|
"\n",
|
||||||
|
"**Note:** Creation of a new workspace can take several minutes."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# Import the Workspace class and check the Azure ML SDK version.\n",
|
||||||
|
"from azureml.core import Workspace\n",
|
||||||
|
"\n",
|
||||||
|
"ws = Workspace.create(name = workspace_name,\n",
|
||||||
|
" subscription_id = subscription_id,\n",
|
||||||
|
" resource_group = resource_group, \n",
|
||||||
|
" location = workspace_region,\n",
|
||||||
|
" exist_ok=True)\n",
|
||||||
|
"ws.get_details()"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Configuring Your Local Environment\n",
|
||||||
|
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"from azureml.core import Workspace\n",
|
||||||
|
"\n",
|
||||||
|
"ws = Workspace(workspace_name = workspace_name,\n",
|
||||||
|
" subscription_id = subscription_id,\n",
|
||||||
|
" resource_group = resource_group)\n",
|
||||||
|
"\n",
|
||||||
|
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
|
||||||
|
"ws.write_config()"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Create a Folder to Host Sample Projects\n",
|
||||||
|
"Finally, create a folder where all the sample projects will be hosted."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"import os\n",
|
||||||
|
"\n",
|
||||||
|
"sample_projects_folder = './sample_projects'\n",
|
||||||
|
"\n",
|
||||||
|
"if not os.path.isdir(sample_projects_folder):\n",
|
||||||
|
" os.mkdir(sample_projects_folder)\n",
|
||||||
|
" \n",
|
||||||
|
"print('Sample projects will be created in {}.'.format(sample_projects_folder))"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Create an Experiment\n",
|
||||||
|
"\n",
|
||||||
|
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"import logging\n",
|
||||||
|
"import os\n",
|
||||||
|
"import random\n",
|
||||||
|
"import time\n",
|
||||||
|
"\n",
|
||||||
|
"from matplotlib import pyplot as plt\n",
|
||||||
|
"from matplotlib.pyplot import imshow\n",
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import pandas as pd\n",
|
||||||
|
"\n",
|
||||||
|
"import azureml.core\n",
|
||||||
|
"from azureml.core.experiment import Experiment\n",
|
||||||
|
"from azureml.core.workspace import Workspace\n",
|
||||||
|
"from azureml.train.automl import AutoMLConfig\n",
|
||||||
|
"from azureml.train.automl.run import AutoMLRun"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# Choose a name for the experiment and specify the project folder.\n",
|
||||||
|
"experiment_name = 'automl-local-classification'\n",
|
||||||
|
"project_folder = './sample_projects/automl-local-classification'\n",
|
||||||
|
"\n",
|
||||||
|
"experiment = Experiment(ws, experiment_name)\n",
|
||||||
|
"\n",
|
||||||
|
"output = {}\n",
|
||||||
|
"output['SDK version'] = azureml.core.VERSION\n",
|
||||||
|
"output['Subscription ID'] = ws.subscription_id\n",
|
||||||
|
"output['Workspace Name'] = ws.name\n",
|
||||||
|
"output['Resource Group'] = ws.resource_group\n",
|
||||||
|
"output['Location'] = ws.location\n",
|
||||||
|
"output['Project Directory'] = project_folder\n",
|
||||||
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
|
"pd.DataFrame(data = output, index = ['']).T"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Diagnostics\n",
|
||||||
|
"\n",
|
||||||
|
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||||
|
"set_diagnostics_collection(send_diagnostics = True)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Load Training Data Using DataPrep"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"import azureml.dataprep as dprep\n",
|
||||||
|
"# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
|
||||||
|
"# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
|
||||||
|
"simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
|
||||||
|
"X_train = dprep.auto_read_file(simple_example_data_root + 'X.csv').skip(1) # Remove the header row.\n",
|
||||||
|
"\n",
|
||||||
|
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)\n",
|
||||||
|
"# and convert column types manually.\n",
|
||||||
|
"# Here we read a comma delimited file and convert all columns to integers.\n",
|
||||||
|
"y_train = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Review the Data Preparation Result\n",
|
||||||
|
"You can peek the result of a Dataflow at any range using skip(i) and head(j). Doing so evaluates only j records for all the steps in the Dataflow, which makes it fast even against large datasets."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"X_train.skip(1).head(5)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Configure AutoML\n",
|
||||||
|
"\n",
|
||||||
|
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
|
||||||
|
"\n",
|
||||||
|
"|Property|Description|\n",
|
||||||
|
"|-|-|\n",
|
||||||
|
"|**task**|classification or regression|\n",
|
||||||
|
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
|
||||||
|
"|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|\n",
|
||||||
|
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
|
||||||
|
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
||||||
|
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
||||||
|
"|**spark_context**|Spark Context object. for Databricks, use spark_context=sc|\n",
|
||||||
|
"|**max_concurrent_iterations**|Maximum number of iterations to execute in parallel. This should be <= number of worker nodes in your Azure Databricks cluster.|\n",
|
||||||
|
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
||||||
|
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
|
||||||
|
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
|
||||||
|
"|**preprocess**|set this to True to enable pre-processing of data eg. string to numeric using one-hot encoding|\n",
|
||||||
|
"|**exit_score**|Target score for experiment. It is associated with the metric. eg. exit_score=0.995 will exit experiment after that|"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"automl_config = AutoMLConfig(task = 'classification',\n",
|
||||||
|
" debug_log = 'automl_errors.log',\n",
|
||||||
|
" primary_metric = 'AUC_weighted',\n",
|
||||||
|
" iteration_timeout_minutes = 10,\n",
|
||||||
|
" iterations = 5,\n",
|
||||||
|
" n_cross_validations = 2,\n",
|
||||||
|
" max_concurrent_iterations = 4, #change it based on number of worker nodes\n",
|
||||||
|
" verbosity = logging.INFO,\n",
|
||||||
|
" spark_context=sc, #databricks/spark related\n",
|
||||||
|
" X = X_train, \n",
|
||||||
|
" y = y_train,\n",
|
||||||
|
" enable_cache=False,\n",
|
||||||
|
" path = project_folder)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Train the Models\n",
|
||||||
|
"\n",
|
||||||
|
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
|
||||||
|
"In this example, we specify `show_output = True` to print currently running iterations to the console."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"local_run = experiment.submit(automl_config, show_output = True) # for higher runs please use show_output=False and use the below"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Explore the Results"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Portal URL for Monitoring Runs\n",
|
||||||
|
"\n",
|
||||||
|
"The following will provide a link to the web interface to explore individual run details and status. In the future we might support output displayed in the notebook."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"displayHTML(\"<a href={} target='_blank'>Azure Portal: {}</a>\".format(local_run.get_portal_url(), local_run.id))"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"The following will show the child runs and waits for the parent run to complete."
|
||||||
|
]
|
||||||
|
},
|
||||||
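A minimal sketch of what such a waiting cell typically does, assuming the `local_run` object from the submit cell above (the original cell is not shown here):

```python
# Hedged sketch: block until the parent automated ML run finishes.
local_run.wait_for_completion(show_output=False)
```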
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Retrieve All Child Runs after the experiment is completed (in portal)\n",
|
||||||
|
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"children = list(local_run.get_children())\n",
|
||||||
|
"metricslist = {}\n",
|
||||||
|
"for run in children:\n",
|
||||||
|
" properties = run.get_properties()\n",
|
||||||
|
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
|
||||||
|
" metricslist[int(properties['iteration'])] = metrics\n",
|
||||||
|
"\n",
|
||||||
|
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
|
||||||
|
"rundata"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Retrieve the Best Model after the above run is complete \n",
|
||||||
|
"\n",
|
||||||
|
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"best_run, fitted_model = local_run.get_output()\n",
|
||||||
|
"print(best_run)\n",
|
||||||
|
"print(fitted_model)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Best Model Based on Any Other Metric after the above run is complete based on the child run\n",
|
||||||
|
"Show the run and the model that has the smallest `log_loss` value:"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"lookup_metric = \"log_loss\"\n",
|
||||||
|
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
|
||||||
|
"print(best_run)\n",
|
||||||
|
"print(fitted_model)"
|
||||||
|
]
|
||||||
|
},
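Similarly, if you want the model produced by a specific iteration rather than the best overall, `get_output` also accepts an iteration number, per the overloads described above. A minimal sketch (the iteration index 3 is arbitrary):

```python
# Retrieve the run and fitted model for a specific iteration (here: iteration 3).
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
```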
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Register the Fitted Model for Deployment\n",
|
||||||
|
"If neither metric nor iteration are specified in the register_model call, the iteration with the best primary metric is registered."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"description = 'AutoML Model'\n",
|
||||||
|
"tags = None\n",
|
||||||
|
"model = local_run.register_model(description = description, tags = tags)\n",
|
||||||
|
"local_run.model_id # This will be written to the scoring script file later in the notebook."
|
||||||
|
]
|
||||||
|
},
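The description above also mentions `metric` and `iteration` arguments on `register_model`; here is a sketch of using them to register a model other than the one with the best primary metric (the argument values are illustrative):

```python
# Register the model that performed best on log_loss instead of the primary metric.
description = 'AutoML model selected on log_loss'
model = local_run.register_model(description = description, metric = 'log_loss')

# Or register the model produced by a specific iteration:
# model = local_run.register_model(description = description, iteration = 3)
```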
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Create Scoring Script\n",
|
||||||
|
"Replace model_id with name of model from output of above register cell"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"%%writefile score.py\n",
|
||||||
|
"import pickle\n",
|
||||||
|
"import json\n",
|
||||||
|
"import numpy\n",
|
||||||
|
"import azureml.train.automl\n",
|
||||||
|
"from sklearn.externals import joblib\n",
|
||||||
|
"from azureml.core.model import Model\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"def init():\n",
|
||||||
|
" global model\n",
|
||||||
|
" model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy\n",
|
||||||
|
" # deserialize the model file back into a sklearn model\n",
|
||||||
|
" model = joblib.load(model_path)\n",
|
||||||
|
"\n",
|
||||||
|
"def run(rawdata):\n",
|
||||||
|
" try:\n",
|
||||||
|
" data = json.loads(rawdata)['data']\n",
|
||||||
|
" data = numpy.array(data)\n",
|
||||||
|
" result = model.predict(data)\n",
|
||||||
|
" except Exception as e:\n",
|
||||||
|
" result = str(e)\n",
|
||||||
|
" return json.dumps({\"error\": result})\n",
|
||||||
|
" return json.dumps({\"result\":result.tolist()})"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Create a YAML File for the Environment"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||||
|
"\n",
|
||||||
|
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])\n",
|
||||||
|
"\n",
|
||||||
|
"conda_env_file_name = 'mydeployenv.yml'\n",
|
||||||
|
"myenv.save_to_file('.', conda_env_file_name)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Create ACI config"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"#deploy to ACI\n",
|
||||||
|
"from azureml.core.webservice import AciWebservice, Webservice\n",
|
||||||
|
"\n",
|
||||||
|
"myaci_config = AciWebservice.deploy_configuration(\n",
|
||||||
|
" cpu_cores = 2, \n",
|
||||||
|
" memory_gb = 2, \n",
|
||||||
|
" tags = {'name':'Databricks Azure ML ACI'}, \n",
|
||||||
|
" description = 'This is for ADB and AutoML example.')"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Deploy the Image as a Web Service on Azure Container Instance\n",
|
||||||
|
"Replace servicename with any meaningful name of service"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"\n",
|
||||||
|
"# this will take 10-15 minutes to finish\n",
|
||||||
|
"\n",
|
||||||
|
"service_name = \"<<servicename>>\"\n",
|
||||||
|
"runtime = \"spark-py\" \n",
|
||||||
|
"driver_file = \"score.py\"\n",
|
||||||
|
"my_conda_file = \"mydeployenv.yml\"\n",
|
||||||
|
"\n",
|
||||||
|
"# image creation\n",
|
||||||
|
"from azureml.core.image import ContainerImage\n",
|
||||||
|
"myimage_config = ContainerImage.image_configuration(execution_script = driver_file, \n",
|
||||||
|
" runtime = runtime, \n",
|
||||||
|
" conda_file = 'mydeployenv.yml')\n",
|
||||||
|
"\n",
|
||||||
|
"# Webservice creation\n",
|
||||||
|
"myservice = Webservice.deploy_from_model(\n",
|
||||||
|
" workspace=ws, \n",
|
||||||
|
" name=service_name,\n",
|
||||||
|
" deployment_config = myaci_config,\n",
|
||||||
|
" models = [model],\n",
|
||||||
|
" image_config = myimage_config\n",
|
||||||
|
" )\n",
|
||||||
|
"\n",
|
||||||
|
"myservice.wait_for_deployment(show_output=True)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"#for using the Web HTTP API \n",
|
||||||
|
"print(myservice.scoring_uri)"
|
||||||
|
]
|
||||||
|
},
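The scoring URI accepts plain HTTP POST requests carrying the same `{"data": [...]}` payload that the `score.py` script above parses. A minimal sketch using `requests` (the two digit rows are only an illustration):

```python
import json
import requests
from sklearn import datasets

digits = datasets.load_digits()

# Build a payload in the format score.py expects: {"data": [[...feature rows...]]}
input_payload = json.dumps({"data": digits.data[0:2].tolist()})

headers = {"Content-Type": "application/json"}
response = requests.post(myservice.scoring_uri, data=input_payload, headers=headers)
print(response.status_code)
print(response.json())
```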
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Test the Best Fitted Model\n",
|
||||||
|
"\n",
|
||||||
|
"#### Load Test Data - you can split the dataset beforehand & pass Train dataset to AutoML and use Test dataset to evaluate the best model."
|
||||||
|
]
|
||||||
|
},
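A minimal sketch of such a split with scikit-learn (the split ratio and random seed are arbitrary); you would pass only the training split to `AutoMLConfig` and keep the test split for evaluation:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()

# Hold out 20% of the data before training; pass X_train / y_train to AutoMLConfig
# and keep X_test / y_test for evaluating the best fitted model.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)
```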
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"from sklearn import datasets\n",
|
||||||
|
"digits = datasets.load_digits()\n",
|
||||||
|
"X_test = digits.data[:10, :]\n",
|
||||||
|
"y_test = digits.target[:10]\n",
|
||||||
|
"images = digits.images[:10]"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Testing Our Best Fitted Model\n",
|
||||||
|
"We will try to predict digits and see how our model works. This is just an example to show you."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# Randomly select digits and test.\n",
|
||||||
|
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
|
||||||
|
" print(index)\n",
|
||||||
|
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
|
||||||
|
" label = y_test[index]\n",
|
||||||
|
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
|
||||||
|
" fig = plt.figure(1, figsize = (3,3))\n",
|
||||||
|
" ax1 = fig.add_axes((0,0,.8,.8))\n",
|
||||||
|
" ax1.set_title(title)\n",
|
||||||
|
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
|
||||||
|
" display(fig)"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"authors": [
|
||||||
|
{
|
||||||
|
"name": "savitam"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"name": "wamartin"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"kernelspec": {
|
||||||
|
"display_name": "Python 3.6",
|
||||||
|
"language": "python",
|
||||||
|
"name": "python36"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"codemirror_mode": {
|
||||||
|
"name": "ipython",
|
||||||
|
"version": 3
|
||||||
|
},
|
||||||
|
"file_extension": ".py",
|
||||||
|
"mimetype": "text/x-python",
|
||||||
|
"name": "python",
|
||||||
|
"nbconvert_exporter": "python",
|
||||||
|
"pygments_lexer": "ipython3",
|
||||||
|
"version": "3.7.0"
|
||||||
|
},
|
||||||
|
"name": "auto-ml-classification-local-adb",
|
||||||
|
"notebookId": 3888835968049288
|
||||||
|
},
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0
|
||||||
|
}
|
||||||
@@ -473,9 +473,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python [default]",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -403,11 +403,11 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### b. Connect Blob to Power Bi (Small Data only)\n",
|
"### b. Connect Blob to Power Bi (Small Data only)\n",
|
||||||
"1. Download and Open PowerBi Desktop\n",
|
"1. Download and Open PowerBi Desktop\n",
|
||||||
"2. Select “Get Data” and click on “Azure Blob Storage” >> Connect\n",
|
"2. Select \u201cGet Data\u201d and click on \u201cAzure Blob Storage\u201d >> Connect\n",
|
||||||
"3. Add your storage account and enter your storage key.\n",
|
"3. Add your storage account and enter your storage key.\n",
|
||||||
"4. Select the container where your Data Collection is stored and click on Edit. \n",
|
"4. Select the container where your Data Collection is stored and click on Edit. \n",
|
||||||
"5. In the query editor, click under “Name” column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3\n",
|
"5. In the query editor, click under \u201cName\u201d column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3\n",
|
||||||
"6. Click on the double arrow aside the “Content” column to combine the files. \n",
|
"6. Click on the double arrow aside the \u201cContent\u201d column to combine the files. \n",
|
||||||
"7. Click OK and the data will preload.\n",
|
"7. Click OK and the data will preload.\n",
|
||||||
"8. You can now click Close and Apply and start building your custom reports on your Model Input data."
|
"8. You can now click Close and Apply and start building your custom reports on your Model Input data."
|
||||||
]
|
]
|
||||||
@@ -455,9 +455,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python [default]",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -4,7 +4,7 @@ These tutorials show how to create and deploy Open Neural Network eXchange ([ONN
|
|||||||
|
|
||||||
## Tutorials
|
## Tutorials
|
||||||
|
|
||||||
0. [Configure your Azure Machine Learning Workspace](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb)
|
0. [Configure your Azure Machine Learning Workspace](../../../configuration.ipynb)
|
||||||
|
|
||||||
#### Obtain models from the [ONNX Model Zoo](https://github.com/onnx/models) and deploy with ONNX Runtime Inference
|
#### Obtain models from the [ONNX Model Zoo](https://github.com/onnx/models) and deploy with ONNX Runtime Inference
|
||||||
1. [Handwritten Digit Classification (MNIST)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb)
|
1. [Handwritten Digit Classification (MNIST)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb)
|
||||||
|
|||||||
@@ -33,7 +33,7 @@
|
|||||||
"To make the best use of your time, make sure you have done the following:\n",
|
"To make the best use of your time, make sure you have done the following:\n",
|
||||||
"\n",
|
"\n",
|
||||||
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
||||||
"* Go through the [00.configuration.ipynb](../00.configuration.ipynb) notebook to:\n",
|
"* Go through the [configuration](../../../configuration.ipynb) notebook to:\n",
|
||||||
" * install the AML SDK\n",
|
" * install the AML SDK\n",
|
||||||
" * create a workspace and its configuration file (config.json)"
|
" * create a workspace and its configuration file (config.json)"
|
||||||
]
|
]
|
||||||
@@ -71,7 +71,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Convert model to ONNX\n",
|
"## Convert model to ONNX\n",
|
||||||
"\n",
|
"\n",
|
||||||
"First we download the CoreML model. We use the CoreML model listed at https://coreml.store/tinyyolo. This may take a few minutes."
|
"First we download the CoreML model. We use the CoreML model from [Matthijs Hollemans's tutorial](https://github.com/hollance/YOLO-CoreML-MPSNNGraph). This may take a few minutes."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -82,8 +82,8 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"import urllib.request\n",
|
"import urllib.request\n",
|
||||||
"\n",
|
"\n",
|
||||||
"onnx_model_url = \"https://s3-us-west-2.amazonaws.com/coreml-models/TinyYOLO.mlmodel\"\n",
|
"coreml_model_url = \"https://github.com/hollance/YOLO-CoreML-MPSNNGraph/raw/master/TinyYOLO-CoreML/TinyYOLO-CoreML/TinyYOLO.mlmodel\"\n",
|
||||||
"urllib.request.urlretrieve(onnx_model_url, filename=\"TinyYOLO.mlmodel\")\n"
|
"urllib.request.urlretrieve(coreml_model_url, filename=\"TinyYOLO.mlmodel\")\n"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -34,7 +34,7 @@
|
|||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### 1. Install Azure ML SDK and create a new workspace\n",
|
"### 1. Install Azure ML SDK and create a new workspace\n",
|
||||||
"Please follow [Azure ML configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) to set up your environment.\n",
|
"Please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### 2. Install additional packages needed for this Notebook\n",
|
"### 2. Install additional packages needed for this Notebook\n",
|
||||||
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n",
|
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n",
|
||||||
|
|||||||
@@ -34,7 +34,7 @@
|
|||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### 1. Install Azure ML SDK and create a new workspace\n",
|
"### 1. Install Azure ML SDK and create a new workspace\n",
|
||||||
"Please follow [Azure ML configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) to set up your environment.\n",
|
"Please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"### 2. Install additional packages needed for this tutorial notebook\n",
|
"### 2. Install additional packages needed for this tutorial notebook\n",
|
||||||
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed. \n",
|
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed. \n",
|
||||||
|
|||||||
@@ -33,7 +33,7 @@
|
|||||||
"To make the best use of your time, make sure you have done the following:\n",
|
"To make the best use of your time, make sure you have done the following:\n",
|
||||||
"\n",
|
"\n",
|
||||||
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
||||||
"* Go through the [00.configuration.ipynb](../00.configuration.ipynb) notebook to:\n",
|
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
|
||||||
" * install the AML SDK\n",
|
" * install the AML SDK\n",
|
||||||
" * create a workspace and its configuration file (config.json)"
|
" * create a workspace and its configuration file (config.json)"
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -13,7 +13,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## 10. Register Model, Create Image and Deploy Service\n",
|
"## Register Model, Create Image and Deploy Service\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This example shows how to deploy a web service in step-by-step fashion:\n",
|
"This example shows how to deploy a web service in step-by-step fashion:\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -24,9 +24,9 @@
|
|||||||
" 5. Deploy the image as web service\n",
|
" 5. Deploy the image as web service\n",
|
||||||
" \n",
|
" \n",
|
||||||
"**IMPORTANT**:\n",
|
"**IMPORTANT**:\n",
|
||||||
" * This notebook requires you to first complete \"01.SDK-101-Train-and-Deploy-to-ACI.ipynb\" Notebook\n",
|
" * This notebook requires you to first complete [train-within-notebook](../../training/train-within-notebook/train-within-notebook.ipynb) example\n",
|
||||||
" \n",
|
" \n",
|
||||||
"The 101 Notebook taught you how to deploy a web service directly from model in one step. This Notebook shows a more advanced approach that gives you more control over model versions and Docker image versions. "
|
"The train-within-notebook example taught you how to deploy a web service directly from model in one step. This Notebook shows a more advanced approach that gives you more control over model versions and Docker image versions. "
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -34,7 +34,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
|
"Make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -310,9 +310,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -260,9 +260,9 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property\n",
|
"# For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
|
||||||
"# example: un-comment the following line.\n",
|
"# example: un-comment the following line.\n",
|
||||||
"# print(aml_compute.status.serialize())"
|
"# print(aml_compute.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -584,9 +584,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -100,9 +100,9 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property\n",
|
"# For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
|
||||||
"# example: un-comment the following line.\n",
|
"# example: un-comment the following line.\n",
|
||||||
"# print(aml_compute.status.serialize())"
|
"# print(aml_compute.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -346,9 +346,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -346,9 +346,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python [default]",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -317,8 +317,9 @@
|
|||||||
"- **existing_cluster_id:** Cluster ID of an existing Interactive cluster on the Databricks workspace. If you are providing this, do not provide any of the parameters below that are used to create a new cluster such as spark_version, node_type, etc.\n",
|
"- **existing_cluster_id:** Cluster ID of an existing Interactive cluster on the Databricks workspace. If you are providing this, do not provide any of the parameters below that are used to create a new cluster such as spark_version, node_type, etc.\n",
|
||||||
"- **spark_version:** Version of spark for the databricks run cluster. default value: 4.0.x-scala2.11\n",
|
"- **spark_version:** Version of spark for the databricks run cluster. default value: 4.0.x-scala2.11\n",
|
||||||
"- **node_type:** Azure vm node types for the databricks run cluster. default value: Standard_D3_v2\n",
|
"- **node_type:** Azure vm node types for the databricks run cluster. default value: Standard_D3_v2\n",
|
||||||
"- **num_workers:** Number of workers for the databricks run cluster\n",
|
"- **num_workers:** Specifies a static number of workers for the databricks run cluster\n",
|
||||||
"- **autoscale:** The autoscale configuration for the databricks run cluster\n",
|
"- **min_workers:** Specifies a min number of workers to use for auto-scaling the databricks run cluster\n",
|
||||||
|
"- **max_workers:** Specifies a max number of workers to use for auto-scaling the databricks run cluster\n",
|
||||||
"- **spark_env_variables:** Spark environment variables for the databricks run cluster (dictionary of {str:str}). default value: {'PYSPARK_PYTHON': '/databricks/python3/bin/python3'}\n",
|
"- **spark_env_variables:** Spark environment variables for the databricks run cluster (dictionary of {str:str}). default value: {'PYSPARK_PYTHON': '/databricks/python3/bin/python3'}\n",
|
||||||
"- **notebook_path:** Path to the notebook in the databricks instance. If you are providing this, do not provide python script related paramaters or JAR related parameters.\n",
|
"- **notebook_path:** Path to the notebook in the databricks instance. If you are providing this, do not provide python script related paramaters or JAR related parameters.\n",
|
||||||
"- **notebook_params:** Parameters for the databricks notebook (dictionary of {str:str}). Fetch this inside the notebook using dbutils.widgets.get(\"myparam\")\n",
|
"- **notebook_params:** Parameters for the databricks notebook (dictionary of {str:str}). Fetch this inside the notebook using dbutils.widgets.get(\"myparam\")\n",
|
||||||
@@ -342,7 +343,7 @@
|
|||||||
"- **version:** Optional version tag to denote a change in functionality for the step\n",
|
"- **version:** Optional version tag to denote a change in functionality for the step\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\\* *denotes required fields* \n",
|
"\\* *denotes required fields* \n",
|
||||||
"*You must provide exactly one of num_workers or autoscale paramaters* \n",
|
"*You must provide exactly one of num_workers or min_workers and max_workers paramaters* \n",
|
||||||
"*You must provide exactly one of databricks_compute or databricks_compute_name parameters*\n",
|
"*You must provide exactly one of databricks_compute or databricks_compute_name parameters*\n",
|
||||||
"\n",
|
"\n",
|
||||||
"## Use runconfig to specify library dependencies\n",
|
"## Use runconfig to specify library dependencies\n",
|
||||||
@@ -388,7 +389,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### 1. Running the demo notebook already added to the Databricks workspace\n",
|
"### 1. Running the demo notebook already added to the Databricks workspace\n",
|
||||||
"Create a notebook in the Azure Databricks workspace, and provide the path to that notebook as the value associated with the environment variable \"DATABRICKS_NOTEBOOK_PATH\". This will then set the variable notebook_path when you run the code cell below:"
|
"Create a notebook in the Azure Databricks workspace, and provide the path to that notebook as the value associated with the environment variable \"DATABRICKS_NOTEBOOK_PATH\". This will then set the variable\u00c2\u00a0notebook_path\u00c2\u00a0when you run the code cell below:"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -425,11 +426,10 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"#PUBLISHONLY\n",
|
"steps = [dbNbStep]\n",
|
||||||
"#steps = [dbNbStep]\n",
|
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
||||||
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
"pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline)\n",
|
||||||
"#pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline)\n",
|
"pipeline_run.wait_for_completion()"
|
||||||
"#pipeline_run.wait_for_completion()"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -445,9 +445,8 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"#PUBLISHONLY\n",
|
"from azureml.widgets import RunDetails\n",
|
||||||
"#from azureml.widgets import RunDetails\n",
|
"RunDetails(pipeline_run).show()"
|
||||||
"#RunDetails(pipeline_run).show()"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -497,11 +496,10 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"#PUBLISHONLY\n",
|
"steps = [dbPythonInDbfsStep]\n",
|
||||||
"#steps = [dbPythonInDbfsStep]\n",
|
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
||||||
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
"pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline)\n",
|
||||||
"#pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline)\n",
|
"pipeline_run.wait_for_completion()"
|
||||||
"#pipeline_run.wait_for_completion()"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -517,9 +515,8 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"#PUBLISHONLY\n",
|
"from azureml.widgets import RunDetails\n",
|
||||||
"#from azureml.widgets import RunDetails\n",
|
"RunDetails(pipeline_run).show()"
|
||||||
"#RunDetails(pipeline_run).show()"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -640,11 +637,10 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"#PUBLISHONLY\n",
|
"steps = [dbJarInDbfsStep]\n",
|
||||||
"#steps = [dbJarInDbfsStep]\n",
|
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
||||||
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
|
"pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline)\n",
|
||||||
"#pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline)\n",
|
"pipeline_run.wait_for_completion()"
|
||||||
"#pipeline_run.wait_for_completion()"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -660,9 +656,8 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"#PUBLISHONLY\n",
|
"from azureml.widgets import RunDetails\n",
|
||||||
"#from azureml.widgets import RunDetails\n",
|
"RunDetails(pipeline_run).show()"
|
||||||
"#RunDetails(pipeline_run).show()"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -681,9 +676,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -158,9 +158,9 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property\n",
|
"# For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
|
||||||
"# example: un-comment the following line.\n",
|
"# example: un-comment the following line.\n",
|
||||||
"# print(aml_compute.status.serialize())"
|
"# print(aml_compute.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -396,9 +396,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -76,7 +76,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Set up datastores\n",
|
"### Set up datastores\n",
|
||||||
"First, let’s access the datastore that has the model, labels, and images. \n",
|
"First, let\u00e2\u20ac\u2122s access the datastore that has the model, labels, and images. \n",
|
||||||
"\n",
|
"\n",
|
||||||
"### Create a datastore that points to a blob container containing sample images\n",
|
"### Create a datastore that points to a blob container containing sample images\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -106,7 +106,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"Next, let’s specify the default datastore for the outputs."
|
"Next, let\u00e2\u20ac\u2122s specify the default datastore for the outputs."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -193,8 +193,8 @@
|
|||||||
" # if no min node count is provided it will use the scale settings for the cluster\n",
|
" # if no min node count is provided it will use the scale settings for the cluster\n",
|
||||||
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
|
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
|
||||||
" \n",
|
" \n",
|
||||||
" # For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property \n",
|
" # For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
|
||||||
" print(compute_target.status.serialize())"
|
" print(compute_target.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -295,7 +295,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Build and run the batch scoring pipeline\n",
|
"## Build and run the batch scoring pipeline\n",
|
||||||
"You have everything you need to build the pipeline. Let’s put all these together."
|
"You have everything you need to build the pipeline. Let\u00e2\u20ac\u2122s put all these together."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -551,9 +551,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -588,9 +588,9 @@
|
|||||||
}
|
}
|
||||||
],
|
],
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "Python 3",
|
"display_name": "Python 3.6",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python36"
|
||||||
},
|
},
|
||||||
"language_info": {
|
"language_info": {
|
||||||
"codemirror_mode": {
|
"codemirror_mode": {
|
||||||
|
|||||||
@@ -23,7 +23,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
||||||
"* Go through the [00.configuration.ipynb]() notebook to:\n",
|
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
|
||||||
" * install the AML SDK\n",
|
" * install the AML SDK\n",
|
||||||
" * create a workspace and its configuration file (`config.json`)"
|
" * create a workspace and its configuration file (`config.json`)"
|
||||||
]
|
]
|
||||||
@@ -124,8 +124,8 @@
|
|||||||
"\n",
|
"\n",
|
||||||
" compute_target.wait_for_completion(show_output=True)\n",
|
" compute_target.wait_for_completion(show_output=True)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Use the 'status' property to get a detailed status for the current AmlCompute. \n",
|
"# use get_status() to get a detailed status for the current AmlCompute. \n",
|
||||||
"print(compute_target.status.serialize())"
|
"print(compute_target.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -22,7 +22,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"* Go through the [Configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`\n",
|
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`\n",
|
||||||
"* Review the [tutorial](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) on single-node PyTorch training using Azure Machine Learning"
|
"* Review the [tutorial](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) on single-node PyTorch training using Azure Machine Learning"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -122,8 +122,8 @@
|
|||||||
"\n",
|
"\n",
|
||||||
" compute_target.wait_for_completion(show_output=True)\n",
|
" compute_target.wait_for_completion(show_output=True)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Use the 'status' property to get a detailed status for the current AmlCompute. \n",
|
"# use get_status() to get a detailed status for the current AmlCompute. \n",
|
||||||
"print(compute_target.status.serialize())"
|
"print(compute_target.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -23,7 +23,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n",
|
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n",
|
||||||
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
|
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
|
||||||
" * install the AML SDK\n",
|
" * install the AML SDK\n",
|
||||||
" * create a workspace and its configuration file (`config.json`)\n",
|
" * create a workspace and its configuration file (`config.json`)\n",
|
||||||
"* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK"
|
"* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK"
|
||||||
@@ -124,8 +124,8 @@
|
|||||||
"\n",
|
"\n",
|
||||||
" compute_target.wait_for_completion(show_output=True)\n",
|
" compute_target.wait_for_completion(show_output=True)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Use the 'status' property to get a detailed status for the current cluster. \n",
|
"# use get_status() to get a detailed status for the current cluster. \n",
|
||||||
"print(compute_target.status.serialize())"
|
"print(compute_target.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -23,7 +23,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n",
|
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n",
|
||||||
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
|
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
|
||||||
" * install the AML SDK\n",
|
" * install the AML SDK\n",
|
||||||
" * create a workspace and its configuration file (`config.json`)\n",
|
" * create a workspace and its configuration file (`config.json`)\n",
|
||||||
"* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK"
|
"* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK"
|
||||||
@@ -124,8 +124,8 @@
|
|||||||
"\n",
|
"\n",
|
||||||
" compute_target.wait_for_completion(show_output=True)\n",
|
" compute_target.wait_for_completion(show_output=True)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Use the 'status' property to get a detailed status for the current cluster. \n",
|
"# use get_status() to get a detailed status for the current cluster. \n",
|
||||||
"print(compute_target.status.serialize())"
|
"print(compute_target.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -26,7 +26,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
||||||
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
|
"* Go through the [configuration notebook](../../../configuration.ipynb) notebook to:\n",
|
||||||
" * install the AML SDK\n",
|
" * install the AML SDK\n",
|
||||||
" * create a workspace and its configuration file (`config.json`)"
|
" * create a workspace and its configuration file (`config.json`)"
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -27,7 +27,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
||||||
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
|
"* Go through the [configuration notebook](../../../configuration.ipynb) notebook to:\n",
|
||||||
" * install the AML SDK\n",
|
" * install the AML SDK\n",
|
||||||
" * create a workspace and its configuration file (`config.json`)"
|
" * create a workspace and its configuration file (`config.json`)"
|
||||||
]
|
]
|
||||||
@@ -423,8 +423,8 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"compute_target.wait_for_completion(show_output=True, min_node_count=1, timeout_in_minutes=20)\n",
|
"compute_target.wait_for_completion(show_output=True, min_node_count=1, timeout_in_minutes=20)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Use the 'status' property to get a detailed status for the current cluster. \n",
|
"# use get_status() to get a detailed status for the current cluster. \n",
|
||||||
"print(compute_target.status.serialize())"
|
"print(compute_target.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -25,7 +25,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"* Go through the [Configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`"
|
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -123,8 +123,8 @@
|
|||||||
"\n",
|
"\n",
|
||||||
" compute_target.wait_for_completion(show_output=True)\n",
|
" compute_target.wait_for_completion(show_output=True)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Use the 'status' property to get a detailed status for the current cluster. \n",
|
"# use get_status() to get a detailed status for the current cluster. \n",
|
||||||
"print(compute_target.status.serialize())"
|
"print(compute_target.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -26,7 +26,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"## Prerequisite:\n",
|
"## Prerequisite:\n",
|
||||||
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
||||||
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
|
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
|
||||||
" * install the AML SDK\n",
|
" * install the AML SDK\n",
|
||||||
" * create a workspace and its configuration file (`config.json`)"
|
" * create a workspace and its configuration file (`config.json`)"
|
||||||
]
|
]
|
||||||
@@ -299,8 +299,8 @@
|
|||||||
" # if no min node count is provided it uses the scale settings for the cluster\n",
|
" # if no min node count is provided it uses the scale settings for the cluster\n",
|
||||||
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
|
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Use the 'status' property to get a detailed status for the current cluster. \n",
|
"# use get_status() to get a detailed status for the current cluster. \n",
|
||||||
"print(compute_target.status.serialize())"
|
"print(compute_target.get_status().serialize())"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -22,7 +22,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"Make sure you go through the [00. Installation and Configuration](../../00.configuration.ipynb) Notebook first if you haven't. Also make sure you have tqdm and matplotlib installed in the current kernel.\n",
|
"Make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't. Also make sure you have tqdm and matplotlib installed in the current kernel.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"```\n",
|
"```\n",
|
||||||
"(myenv) $ conda install -y tqdm matplotlib\n",
|
"(myenv) $ conda install -y tqdm matplotlib\n",
|
||||||
|
|||||||
@@ -31,7 +31,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"Make sure you go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) Notebook first if you haven't."
|
"Make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -119,7 +119,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"First lets check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the supported_vms () function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.\n",
|
"First lets check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the supported_vms () function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"You can also pass a different region to check availability and then re-create your workspace in that region through the [00. Installation and Configuration](00.configuration.ipynb)"
|
"You can also pass a different region to check availability and then re-create your workspace in that region through the [configuration notebook](../../../configuration.ipynb)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -29,7 +29,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
|
"Make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -30,7 +30,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"## Prerequisites\n",
|
"## Prerequisites\n",
|
||||||
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
|
"Make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -190,7 +190,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"## Create and Attach a DSVM as a compute target\n",
|
"## Create and Attach a DSVM as a compute target\n",
|
||||||
"\n",
|
"\n",
|
||||||
"**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.\n",
|
"**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"```shell\n",
|
"```shell\n",
|
||||||
"# create a DSVM in your resource group\n",
|
"# create a DSVM in your resource group\n",
|
||||||
@@ -209,9 +209,8 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"from azureml.core.compute import RemoteCompute\n",
|
"from azureml.core.compute import ComputeTarget, RemoteCompute\n",
|
||||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||||
"import os\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')\n",
|
"username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')\n",
|
||||||
"address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')\n",
|
"address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')\n",
|
||||||
@@ -222,13 +221,13 @@
|
|||||||
" attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)\n",
|
" attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)\n",
|
||||||
" print('found existing:', attached_dsvm_compute.name)\n",
|
" print('found existing:', attached_dsvm_compute.name)\n",
|
||||||
"except ComputeTargetException:\n",
|
"except ComputeTargetException:\n",
|
||||||
" attached_dsvm_compute = RemoteCompute.attach(workspace=ws,\n",
|
" attach_config = RemoteCompute.attach_configuration(address=address,\n",
|
||||||
" name=compute_target_name,\n",
|
|
||||||
" username=username,\n",
|
|
||||||
" address=address,\n",
|
|
||||||
" ssh_port=22,\n",
|
" ssh_port=22,\n",
|
||||||
|
" username=username,\n",
|
||||||
" private_key_file='./.ssh/id_rsa')\n",
|
" private_key_file='./.ssh/id_rsa')\n",
|
||||||
" \n",
|
" attached_dsvm_compute = ComputeTarget.attach(workspace=ws,\n",
|
||||||
|
" name=compute_target_name,\n",
|
||||||
|
" attach_config=attach_config)\n",
|
||||||
" attached_dsvm_compute.wait_for_completion(show_output=True)"
|
" attached_dsvm_compute.wait_for_completion(show_output=True)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
@@ -296,7 +295,6 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"from azureml.core import Run\n",
|
|
||||||
"from azureml.core import ScriptRunConfig\n",
|
"from azureml.core import ScriptRunConfig\n",
|
||||||
"\n",
|
"\n",
|
||||||
"src = ScriptRunConfig(source_directory=script_folder, \n",
|
"src = ScriptRunConfig(source_directory=script_folder, \n",
|
||||||
@@ -386,7 +384,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"You can choose to SSH into the VM and install Azure ML SDK, and any other missing dependencies, in that Python environment. For demonstration purposes, we simply are going to create another script `train2.py` that doesn't have azureml dependencies, and submit it instead."
|
"You can choose to SSH into the VM and install Azure ML SDK, and any other missing dependencies, in that Python environment. For demonstration purposes, we simply are going to use another script `train2.py` that doesn't have azureml dependencies, and submit it instead."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -395,11 +393,11 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"%%writefile $script_folder/train2.py\n",
+"# copy train2.py into the script folder\n",
+"shutil.copy('./train2.py', os.path.join(script_folder, 'train2.py'))\n",
 "\n",
-"print('####################################')\n",
-"print('Hello World (without Azure ML SDK)!')\n",
-"print('####################################')"
+"with open(os.path.join(script_folder, './train2.py'), 'r') as training_script:\n",
+" print(training_script.read())"
 ]
 },
 {
@@ -452,10 +450,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from azureml.core.runconfig import RunConfiguration\n",
-"from azureml.core.conda_dependencies import CondaDependencies\n",
-"\n",
-"\n",
 "# Load the \"cpu-dsvm.runconfig\" file (created by the above attach operation) in memory\n",
 "docker_run_config = RunConfiguration(framework=\"python\")\n",
 "\n",
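The imports dropped above are presumably available from earlier cells. As a sketch, a Docker-based run configuration for the attached DSVM might be completed roughly like this; the scikit-learn dependency is an illustrative example, not taken from the diff.

```python
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies

# System-managed, Docker-based run configuration targeting the attached DSVM.
docker_run_config = RunConfiguration(framework="python")
docker_run_config.target = attached_dsvm_compute.name  # from the attach step above
docker_run_config.environment.docker.enabled = True

# Example dependency list; replace with whatever the training script needs.
docker_run_config.environment.python.conda_dependencies = \
    CondaDependencies.create(conda_packages=['scikit-learn'])
```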
6 how-to-use-azureml/training/train-on-remote-vm/train2.py Normal file
@@ -0,0 +1,6 @@
+# Copyright (c) Microsoft. All rights reserved.
+# Licensed under the MIT license.
+
+print('####################################')
+print('Hello World (without Azure ML SDK)!')
+print('####################################')
@@ -57,7 +57,7 @@
 "---\n",
 "\n",
 "## Setup\n",
-"Make sure you have completed the [Configuration](..\\..\\configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.\n",
+"Make sure you have completed the [Configuration](../../../configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.\n",
 "\n",
 "We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.\n",
 "```shell\n",
@@ -78,7 +78,7 @@
 "outputs": [],
 "source": [
 "import azureml.core\n",
-"from azureml.core import Experiment, Run, Workspace\n",
+"from azureml.core import Experiment, Workspace\n",
 "\n",
 "# Check core SDK version number\n",
 "print(\"This notebook was created using version 1.0.2 of the Azure ML SDK\")\n",
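A sketch of the version check plus workspace load that this setup cell performs; it assumes the `config.json` written by the Configuration notebook is present in or above the working directory.

```python
import azureml.core
from azureml.core import Experiment, Workspace

# Compare the installed SDK version with the version the notebook was written against.
print("Azure ML SDK version:", azureml.core.VERSION)

# Load the workspace saved by the Configuration notebook.
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, sep='\n')
```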
@@ -568,7 +568,6 @@
 "outputs": [],
 "source": [
 "import requests\n",
-"import json\n",
 "\n",
 "# use the first row from the test set again\n",
 "test_samples = json.dumps({\"data\": X_test[0:1, :].tolist()})\n",
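A sketch of how that payload is usually posted to the deployed service; `X_test` and the `service` object (the ACI webservice created earlier in the notebook) are assumptions from the surrounding cells, not shown in this hunk.

```python
import json
import requests

# Score the first row of the test set over plain HTTP.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})
headers = {'Content-Type': 'application/json'}

response = requests.post(service.scoring_uri, data=test_samples, headers=headers)
print("prediction:", response.text)
```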
@@ -598,7 +597,6 @@
 "outputs": [],
 "source": [
 "%matplotlib inline\n",
-"import matplotlib\n",
 "import matplotlib.pyplot as plt\n",
 "\n",
 "f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})\n",
@@ -607,13 +605,13 @@
 "f.set_figheight(6)\n",
 "f.set_figwidth(14)\n",
 "\n",
-"a0.plot(residual, 'bo', alpha=0.4);\n",
+"a0.plot(residual, 'bo', alpha=0.4)\n",
 "a0.plot([0,90], [0,0], 'r', lw=2)\n",
 "a0.set_ylabel('residue values', fontsize=14)\n",
 "a0.set_xlabel('test data set', fontsize=14)\n",
 "\n",
-"a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');\n",
-"a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);\n",
+"a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step')\n",
+"a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10)\n",
 "a1.set_yticklabels([])\n",
 "\n",
 "plt.show()"
@@ -686,7 +684,7 @@
 }
 ],
 "kernelspec": {
-"display_name": "Python [Python 3.6]",
+"display_name": "Python 3.6",
 "language": "python",
 "name": "python36"
 },
@@ -176,8 +176,8 @@
 " # if no min node count is provided it will use the scale settings for the cluster\n",
 " compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
 " \n",
-" # For a more detailed view of current AmlCompute status, use the 'status' property \n",
-" print(compute_target.status.serialize())"
+" # For a more detailed view of current AmlCompute status, use get_status()\n",
+" print(compute_target.get_status().serialize())"
 ]
 },
 {
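The change above replaces the older `status` property with `get_status()`. A minimal provisioning sketch ending in the same call; the cluster name, VM size and node counts are example values, not taken from the diff.

```python
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

cluster_name = 'cpu-cluster'  # illustrative name

try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print('found existing compute target:', cluster_name)
except ComputeTargetException:
    config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                   min_nodes=0,
                                                   max_nodes=4)
    compute_target = ComputeTarget.create(ws, cluster_name, config)
    # if no min node count is provided it will use the scale settings for the cluster
    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)

# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
```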
@@ -26,7 +26,7 @@
 "> * Explore the results\n",
 "> * Register the best model\n",
 "\n",
-"If you don’t have an Azure subscription, create a [free account](https://aka.ms/AMLfree) before you begin. \n",
+"If you don\u00e2\u20ac\u2122t have an Azure subscription, create a [free account](https://aka.ms/AMLfree) before you begin. \n",
 "\n",
 "> Code in this article was tested with Azure Machine Learning SDK version 1.0.0\n",
 "\n",
@@ -55,8 +55,6 @@
 "import azureml.core\n",
 "import pandas as pd\n",
 "from azureml.core.workspace import Workspace\n",
-"from azureml.train.automl.run import AutoMLRun\n",
-"import time\n",
 "import logging"
 ]
 },
@@ -93,7 +91,8 @@
 "output['Location'] = ws.location\n",
 "output['Project Directory'] = project_folder\n",
 "pd.set_option('display.max_colwidth', -1)\n",
-"pd.DataFrame(data=output, index=['']).T"
+"outputDf = pd.DataFrame(data = output, index = [''])\n",
+"outputDf.T"
 ]
 },
 {
@@ -112,7 +111,6 @@
 "outputs": [],
 "source": [
 "import azureml.dataprep as dprep\n",
-"import os\n",
 "\n",
 "file_path = os.path.join(os.getcwd(), \"dflows.dprep\")\n",
 "\n",
@@ -308,7 +306,6 @@
 " metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
 " metricslist[int(properties['iteration'])] = metrics\n",
 "\n",
-"import pandas as pd\n",
 "rundata = pd.DataFrame(metricslist).sort_index(1)\n",
 "rundata"
 ]
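For context, the loop that the cell above sits in usually looks roughly like this; `local_run` (the parent AutoML run) is assumed from earlier cells, and the guard for runs without an iteration property is an added safety check, not part of the diff.

```python
import pandas as pd

metricslist = {}
for run in local_run.get_children():
    properties = run.get_properties()
    if 'iteration' not in properties:
        continue  # skip child runs that are not AutoML iterations
    # Keep only scalar metrics for the per-iteration summary table.
    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
    metricslist[int(properties['iteration'])] = metrics

rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
```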
@@ -351,7 +348,7 @@
 "description = 'Automated Machine Learning Model'\n",
 "tags = None\n",
 "local_run.register_model(description=description, tags=tags)\n",
-"local_run.model_id # Use this id to deploy the model as a web service in Azure"
+"print(local_run.model_id) # Use this id to deploy the model as a web service in Azure"
 ]
 },
 {
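The change above prints the registered model id rather than relying on notebook auto-display. As a follow-on sketch, the registered model can be looked up again by that id; the `Model` lookup is an illustrative assumption, not part of the diff.

```python
from azureml.core.model import Model

description = 'Automated Machine Learning Model'
tags = None

# Register the best model from the AutoML run; local_run comes from earlier cells.
local_run.register_model(description=description, tags=tags)
print(local_run.model_id)  # use this id to deploy the model as a web service in Azure

# Optional: fetch the registered model back from the workspace by id.
model = Model(ws, id=local_run.model_id)
print(model.name, model.version)
```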