Compare commits

...

37 Commits

Author SHA1 Message Date
Roope Astala
644729e5db Merge pull request #333 from rastala/master
version 1.0.30
2019-04-22 15:40:11 -04:00
Roope Astala
e2b1b3fcaa version 1.0.30 2019-04-22 15:39:18 -04:00
Roope Astala
dc692589a9 Merge pull request #326 from rastala/master
update aks notebook
2019-04-18 16:19:51 -04:00
Roope Astala
624b4595b5 update aks notebook 2019-04-18 16:18:33 -04:00
Roope Astala
0ed85c33c2 Delete release.json 2019-04-18 10:01:50 -04:00
Roope Astala
5b01de605f Merge pull request #318 from savitamittal1/hdinotebook
Sample HDI notebook
2019-04-18 10:01:26 -04:00
Savitam
c351ac988a Sample HDI notebook
sample HDI notebook
2019-04-15 12:35:34 -07:00
Josée Martens
759ec3934c Delete yt_cover.png 2019-04-15 12:06:25 -05:00
Josée Martens
b499b88a85 Delete python36.png 2019-04-15 12:06:16 -05:00
Josée Martens
5f4edac3c1 Update NBSETUP.md 2019-04-15 12:00:31 -05:00
Josée Martens
edfce0d936 Update README.md 2019-04-12 17:28:16 -05:00
Josée Martens
1516c7fc24 Update README.md
testing for search
2019-04-12 17:19:55 -05:00
Roope Astala
389fb668ce Add files via upload 2019-04-10 11:12:55 -04:00
Josée Martens
647d5e72a5 Merge pull request #307 from Azure/vizhur-patch-2
Create googled8147fb6c0788258.html
2019-04-09 15:21:51 -05:00
vizhur
43ac4c84bb Create googled8147fb6c0788258.html 2019-04-09 16:19:47 -04:00
Roope Astala
8a1a82b50a Merge pull request #303 from rastala/master
dockerfile and missing config update
2019-04-08 15:38:13 -04:00
Roope Astala
72f386298c dockerfile and missing config update 2019-04-08 15:37:48 -04:00
Roope Astala
41d697e298 Merge pull request #302 from rastala/master
version 1.0.23
2019-04-08 15:35:50 -04:00
Roope Astala
c3ce932029 version 1.0.23 2019-04-08 15:34:51 -04:00
Roope Astala
a956162114 Merge pull request #290 from rastala/master
update aks deployment notebook
2019-04-03 10:53:51 -04:00
Roope Astala
cb5a178e40 Merge branch 'master' of github.com:rastala/MachineLearningNotebooks 2019-04-03 10:52:40 -04:00
Roope Astala
d81c336c59 update production deploy to aks 2019-04-03 10:52:15 -04:00
Roope Astala
4244a24d81 Merge pull request #287 from jeff-shepherd/master
Fixed line termination on automl_setup_linux.sh
2019-04-03 09:21:35 -04:00
Jeff Shepherd
3b488555e5 Added back automl_setup_linux.sh with correct line termination 2019-04-02 16:24:05 -07:00
Jeff Shepherd
6abc478f33 Removed automl_setup_linux.sh 2019-04-02 16:23:11 -07:00
Roope Astala
666c2579eb Merge pull request #285 from jeff-shepherd/master
Corrected line termination for automl_setup_mac.sh
2019-04-02 09:19:53 -04:00
Jeff Shepherd
5af3aa4231 Fixed line termination 2019-04-01 16:19:00 -07:00
Jeff Shepherd
e48d828ab0 Removed automl_setup_mac.sh 2019-04-01 16:17:56 -07:00
Jeff Shepherd
44aa636c21 Merge branch 'master' of https://github.com/Azure/MachineLearningNotebooks 2019-04-01 16:07:11 -07:00
Jeff Shepherd
4678f9adc3 Merge branch 'master' of https://github.com/jeff-shepherd/MachineLearningNotebooks 2019-04-01 16:04:46 -07:00
Jeff Shepherd
5bf85edade Added automl_setup_mac.sh with correct line termination 2019-04-01 16:03:39 -07:00
Jeff Shepherd
94f381e884 Removed automl_setup_mac.sh 2019-04-01 16:02:53 -07:00
Roope Astala
ea1b7599c3 Merge pull request #267 from rastala/master
add automl files
2019-03-25 19:26:07 -04:00
Roope Astala
6b8a6befde add automl files 2019-03-25 19:25:38 -04:00
Roope Astala
c1511b7b74 Merge pull request #266 from rastala/master
1.0.21 dockerfile
2019-03-25 15:10:05 -04:00
Roope Astala
8f007a3333 1.0.21 dockerfile 2019-03-25 15:09:39 -04:00
Jeff Shepherd
18a11bbd8d Added model printing example 2019-03-18 16:31:48 -07:00
63 changed files with 7859 additions and 4391 deletions

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.21"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.21" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11
# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# create a new conda environment named azureml
RUN conda create -n azureml -y -q Python=3.6
# install additional packages used by sample notebooks. this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]
# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.23"]
# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.23" --single-branch https://github.com/Azure/MachineLearningNotebooks.git
# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]
# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py
# open up port 8887 on the container
EXPOSE 8887
# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"

View File

@@ -96,7 +96,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.0.21 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.0.23 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -189,6 +189,11 @@ jupyter notebook
- Dataset: [Dominick's grocery sales of orange juice](forecasting-b/dominicks_OJ.csv)
- Example of training an AutoML forecasting model on multiple time-series
- [auto-ml-classification-with-onnx.ipynb](classification-with-onnx/auto-ml-classification-with-onnx.ipynb)
- Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification with ONNX models
- Uses local compute for training
<a name="documentation"></a>
See [Configure automated machine learning experiments](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-auto-train) to learn more about the settings and features available for automated machine learning experiments.
@@ -206,10 +211,18 @@ The main code of the file must be indented so that it is under this condition.
<a name="troubleshooting"></a>
# Troubleshooting
## automl_setup fails
1. On windows, make sure that you are running automl_setup from an Anconda Prompt window rather than a regular cmd window. You can launch the "Anaconda Prompt" window by hitting the Start button and typing "Anaconda Prompt". If you don't see the application "Anaconda Prompt", you might not have conda or mini conda installed. In that case, you can install it [here](https://conda.io/miniconda.html)
1. On Windows, make sure that you are running automl_setup from an Anaconda Prompt window rather than a regular cmd window. You can launch the "Anaconda Prompt" window by hitting the Start button and typing "Anaconda Prompt". If you don't see the application "Anaconda Prompt", you might not have conda or Miniconda installed. In that case, you can install it [here](https://conda.io/miniconda.html).
2. Check that you have conda 64-bit installed rather than 32-bit. You can check this with the command `conda info`. The `platform` should be `win-64` for Windows or `osx-64` for Mac. (A Python sketch after this list shows a related check of the interpreter itself.)
3. Check that you have conda 4.4.10 or later. You can check the version with the command `conda -V`. If you have a previous version installed, you can update it using the command: `conda update conda`.
4. Pass a new name as the first parameter to automl_setup so that it creates a new conda environment. You can view existing conda environments using `conda env list` and remove them with `conda env remove -n <environmentname>`.
4. On Linux, if the error is `gcc: error trying to exec 'cc1plus': execvp: No such file or directory`, install build essentials using the command `sudo apt-get install build-essential`.
5. Pass a new name as the first parameter to automl_setup so that it creates a new conda environment. You can view existing conda environments using `conda env list` and remove them with `conda env remove -n <environmentname>`.
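A minimal Python sketch of a related check of the interpreter itself (an illustration only, not part of the setup scripts; run it inside the conda environment you intend to use):

```python
# Illustration only: confirm the interpreter is 64-bit and report its version.
import platform
import struct
import sys

print("Python:", sys.version.split()[0])
print("Interpreter:", struct.calcsize("P") * 8, "bit")  # 64 expected for a 64-bit install
print("Platform:", platform.platform())
```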
## automl_setup_linux.sh fails
If automl_setup_linux.sh fails on Ubuntu Linux with the error: `unable to execute 'gcc': No such file or directory`
1. Make sure that outbound ports 53 and 80 are enabled. On an Azure VM, you can do this from the Azure Portal by selecting the VM and clicking on Networking.
2. Run the command: `sudo apt-get update`
3. Run the command: `sudo apt-get install build-essential --fix-missing`
4. Run `automl_setup_linux.sh` again.
## configuration.ipynb fails
1) For local conda, make sure that you have successfully run automl_setup first.
@@ -233,6 +246,13 @@ If a sample notebook fails with an error that property, method or library does n
## Numpy import fails on Windows
Some Windows environments see an error loading numpy with the latest Python version 3.6.8. If you see this issue, try with Python version 3.6.7.
## Numpy import fails
Check the tensorflow version in the automated ML conda environment. Supported versions are < 1.13; uninstall tensorflow from the environment if the version is >= 1.13.
You can check the tensorflow version and uninstall it as follows (a Python sketch of the same check appears after this list):
1) Start a command shell and activate the conda environment where the automated ML packages are installed.
2) Enter `pip freeze` and look for `tensorflow`; if found, the listed version should be < 1.13.
3) If the listed version is not a supported version, run `pip uninstall tensorflow` in the command shell and enter y for confirmation.
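A minimal Python sketch of the same check (an illustration only; assumes it is run inside the automated ML conda environment):

```python
# Illustration only: report the installed tensorflow version; versions >= 1.13
# are not supported by the automated ML packages described above.
import pkg_resources

try:
    version = pkg_resources.get_distribution("tensorflow").version
    print("tensorflow", version)
    if tuple(int(p) for p in version.split(".")[:2]) >= (1, 13):
        print("Unsupported version - run: pip uninstall tensorflow")
except pkg_resources.DistributionNotFound:
    print("tensorflow is not installed in this environment")
```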
## Remote run: DsvmCompute.create fails
There are several reasons why DsvmCompute.create can fail. The reason is usually in the error message, but you have to look at the end of the error message for the detailed reason. Some common reasons are:
1) `Compute name is invalid, it should start with a letter, be between 2 and 16 character, and only include letters (a-zA-Z), numbers (0-9) and \'-\'.` Note that underscore is not allowed in the name. (A sketch of this naming rule appears below.)
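A small Python sketch of the naming rule quoted above (an illustration only, not part of the SDK):

```python
# Illustration only: validate a name against the rule quoted above --
# starts with a letter, 2-16 characters, letters/digits/'-' only.
import re

def is_valid_compute_name(name: str) -> bool:
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9-]{1,15}", name) is not None

print(is_valid_compute_name("my-dsvm-01"))  # True
print(is_valid_compute_name("my_dsvm"))     # False - underscore is not allowed
```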

View File

@@ -0,0 +1,22 @@
name: azure_automl
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python>=3.5.2,<3.6.8
- nb_conda
- matplotlib==2.1.0
- numpy>=1.11.0,<1.15.0
- cython
- urllib3<1.24
- scipy>=1.0.0,<=1.1.0
- scikit-learn>=0.18.0,<=0.19.1
- pandas>=0.22.0,<0.23.0
- tensorflow>=1.12.0
- py-xgboost<=0.80
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-sdk[automl,explain]
- azureml-widgets
- pandas_ml

View File

@@ -0,0 +1,23 @@
name: azure_automl
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python>=3.5.2,<3.6.8
- nb_conda
- matplotlib==2.1.0
- numpy>=1.15.3
- cython
- urllib3<1.24
- scipy>=1.0.0,<=1.1.0
- scikit-learn>=0.18.0,<=0.19.1
- pandas>=0.22.0,<0.23.0
- tensorflow>=1.12.0
- py-xgboost<=0.80
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-sdk[automl,explain]
- azureml-widgets
- pandas_ml

View File

@@ -0,0 +1,51 @@
@echo off
set conda_env_name=%1
set automl_env_file=%2
set options=%3
set PIP_NO_WARN_SCRIPT_LOCATION=0
IF "%conda_env_name%"=="" SET conda_env_name="azure_automl"
IF "%automl_env_file%"=="" SET automl_env_file="automl_env.yml"
IF NOT EXIST %automl_env_file% GOTO YmlMissing
call conda activate %conda_env_name% 2>nul:
if not errorlevel 1 (
echo Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment %conda_env_name%
call pip install --upgrade azureml-sdk[automl,notebooks,explain]
if errorlevel 1 goto ErrorExit
) else (
call conda env create -f %automl_env_file% -n %conda_env_name%
)
call conda activate %conda_env_name% 2>nul:
if errorlevel 1 goto ErrorExit
call python -m ipykernel install --user --name %conda_env_name% --display-name "Python (%conda_env_name%)"
REM azureml.widgets is now installed as part of the pip install under the conda env.
REM Removing the old user install so that the notebooks will use the latest widget.
call jupyter nbextension uninstall --user --py azureml.widgets
echo.
echo.
echo ***************************************
echo * AutoML setup completed successfully *
echo ***************************************
IF NOT "%options%"=="nolaunch" (
echo.
echo Starting jupyter notebook - please run the configuration notebook
echo.
jupyter notebook --log-level=50 --notebook-dir='..\..'
)
goto End
:YmlMissing
echo File %automl_env_file% not found.
:ErrorExit
echo Install failed
:End

View File

@@ -0,0 +1,52 @@
#!/bin/bash
CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl"
fi
if [ "$AUTOML_ENV_FILE" == "" ]
then
AUTOML_ENV_FILE="automl_env.yml"
fi
if [ ! -f $AUTOML_ENV_FILE ]; then
echo "File $AUTOML_ENV_FILE not found"
exit 1
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
jupyter nbextension uninstall --user --py azureml.widgets &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
if [ "$OPTIONS" != "nolaunch" ]
then
echo "" &&
echo "Starting jupyter notebook - please run the configuration notebook" &&
echo "" &&
jupyter notebook --log-level=50 --notebook-dir '../..'
fi
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi

View File

@@ -0,0 +1,55 @@
#!/bin/bash
CONDA_ENV_NAME=$1
AUTOML_ENV_FILE=$2
OPTIONS=$3
PIP_NO_WARN_SCRIPT_LOCATION=0
if [ "$CONDA_ENV_NAME" == "" ]
then
CONDA_ENV_NAME="azure_automl"
fi
if [ "$AUTOML_ENV_FILE" == "" ]
then
AUTOML_ENV_FILE="automl_env_mac.yml"
fi
if [ ! -f $AUTOML_ENV_FILE ]; then
echo "File $AUTOML_ENV_FILE not found"
exit 1
fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
conda install lightgbm -c conda-forge -y &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
jupyter nbextension uninstall --user --py azureml.widgets &&
pip install numpy==1.15.3 &&
echo "" &&
echo "" &&
echo "***************************************" &&
echo "* AutoML setup completed successfully *" &&
echo "***************************************" &&
if [ "$OPTIONS" != "nolaunch" ]
then
echo "" &&
echo "Starting jupyter notebook - please run the configuration notebook" &&
echo "" &&
jupyter notebook --log-level=50 --notebook-dir '../..'
fi
fi
if [ $? -gt 0 ]
then
echo "Installation failed"
fi

View File

@@ -139,7 +139,6 @@
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 20,\n",
" iterations = 10,\n",
" n_cross_validations = 2,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
@@ -263,7 +262,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To ensure the fit results are consistent with the training results, the SDK dependency versions need to be the same as the environment that trains the model. Details about retrieving the versions can be found in notebook [12.auto-ml-retrieve-the-training-sdk-versions](12.auto-ml-retrieve-the-training-sdk-versions.ipynb)."
"To ensure the fit results are consistent with the training results, the SDK dependency versions need to be the same as the environment that trains the model. The following cells create a file, myenv.yml, which specifies the dependencies from the run."
]
},
{
@@ -303,7 +302,8 @@
"source": [
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])\n",
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
" pip_packages=['azureml-sdk[automl]'])\n",
"\n",
"conda_env_file_name = 'myenv.yml'\n",
"myenv.save_to_file('.', conda_env_file_name)"

View File

@@ -0,0 +1,284 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification with Local Compute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"Please find the ONNX related documentations [here](https://github.com/onnx/onnx).\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute with ONNX compatible config on.\n",
"4. Explore the results and save the ONNX model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-classification-onnx'\n",
"project_folder = './sample_projects/automl-classification-onnx'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_train = digits.data[100:,:]\n",
"y_train = digits.target[100:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train with enable ONNX compatible models config on\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"Set the parameter enable_onnx_compatible_models=True, if you also want to generate the ONNX compatible models. Please note, the forecasting task and TensorFlow models are not ONNX compatible yet.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**enable_onnx_compatible_models**|Enable the ONNX compatible models in the experiment.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 60,\n",
" iterations = 10,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" enable_onnx_compatible_models=True,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best ONNX Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.\n",
"\n",
"Set the parameter return_onnx_model=True to retrieve the best ONNX model, instead of the Python model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, onnx_mdl = local_run.get_output(return_onnx_model=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Save the best ONNX model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl._vendor.automl.client.core.common.onnx_convert import OnnxConverter\n",
"onnx_fl_path = \"./best_model.onnx\"\n",
"OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -71,11 +71,17 @@
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"try:\n",
" import tensorflow as tf1\n",
"except ImportError:\n",
" from pip._internal import main\n",
" main(['install', 'tensorflow>=1.10.0,<=1.12.0'])\n",
"import sys\n",
"whitelist_models=[\"LightGBM\"]\n",
"if \"3.7\" != sys.version[0:3]:\n",
" try:\n",
" import tensorflow as tf1\n",
" except ImportError:\n",
" from pip._internal import main\n",
" main(['install', 'tensorflow>=1.10.0,<=1.12.0'])\n",
" logging.getLogger().setLevel(logging.ERROR)\n",
" whitelist_models=[\"TensorFlowLinearClassifier\", \"TensorFlowDNN\"]\n",
"\n",
"from azureml.train.automl import AutoMLConfig"
]
},
@@ -160,12 +166,11 @@
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 60,\n",
" iterations = 10,\n",
" n_cross_validations = 3,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" enable_tf=True,\n",
" whitelist_models=[\"TensorFlowLinearClassifier\", \"TensorFlowDNN\"],\n",
" whitelist_models=whitelist_models,\n",
" path = project_folder)"
]
},

View File

@@ -72,6 +72,32 @@
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Accessing the Azure ML workspace requires authentication with Azure.\n",
"\n",
"The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.\n",
"\n",
"If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n",
"\n",
"```\n",
"from azureml.core.authentication import InteractiveLoginAuthentication\n",
"auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')\n",
"ws = Workspace.from_config(auth = auth)\n",
"```\n",
"\n",
"If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n",
"\n",
"```\n",
"from azureml.core.authentication import ServicePrincipalAuthentication\n",
"auth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')\n",
"ws = Workspace.from_config(auth = auth)\n",
"```\n",
"For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -133,12 +159,10 @@
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|<i>Exit Criteria [optional]</i><br><br>iterations<br>experiment_timeout_minutes|An optional duration parameter that says how long AutoML should be run.<br>This could be either number of iterations or number of minutes AutoML is allowed to run. <br><br><i>iterations</i> number of algorithm iterations to run<br><i>experiment_timeout_minutes</i> is the number of minutes that AutoML should run<br><br>By default, this is set to stop whenever AutoML determines that progress in scores is not being made|"
]
},
{
@@ -148,15 +172,10 @@
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 60,\n",
" iterations = 25,\n",
" n_cross_validations = 3,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" path = project_folder)"
" n_cross_validations = 3)"
]
},
{

View File

@@ -163,8 +163,7 @@
" \"iterations\" : 2,\n",
" \"primary_metric\" : 'AUC_weighted',\n",
" \"preprocess\" : False,\n",
" \"verbosity\" : logging.INFO,\n",
" \"n_cross_validations\": 3\n",
" \"verbosity\" : logging.INFO\n",
"}"
]
},

View File

@@ -37,7 +37,8 @@
"2. Instantiating AutoMLConfig with new task type \"forecasting\" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: \"time_column_name\" \n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Testing the fitted model"
"5. Viewing the engineered names for featurized data and featurization summary for all raw features\n",
"6. Testing the fitted model"
]
},
{
@@ -126,7 +127,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split the data to train and test\n",
"### Get the train data\n",
"\n"
]
},
@@ -172,14 +173,10 @@
"metadata": {},
"outputs": [],
"source": [
"X_train = train[train['timeStamp'] < '2017-01-01']\n",
"X_valid = train[train['timeStamp'] >= '2017-01-01']\n",
"X_train = train\n",
"y_train = X_train.pop('demand').values\n",
"y_valid = X_valid.pop('demand').values\n",
"print(X_train.shape)\n",
"print(y_train.shape)\n",
"print(X_valid.shape)\n",
"print(y_valid.shape)"
"print(y_train.shape)"
]
},
{
@@ -198,8 +195,7 @@
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
"|**X_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, ], targets values.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
]
},
@@ -222,8 +218,7 @@
" iteration_timeout_minutes = 5,\n",
" X = X_train,\n",
" y = y_train,\n",
" X_valid = X_valid,\n",
" y_valid = y_valid,\n",
" n_cross_validations = 2,\n",
" path=project_folder,\n",
" verbosity = logging.INFO,\n",
" **automl_settings)"
@@ -273,6 +268,45 @@
"fitted_model.steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View the engineered names for featurized data\n",
"Below we display the engineered feature names generated for the featurized data using the time-series featurization."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View the featurization summary\n",
"Below we display the featurization that was performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:-\n",
"- Raw feature name\n",
"- Number of engineered features formed out of this raw feature\n",
"- Type detected\n",
"- If feature was dropped\n",
"- List of feature transformations for the raw feature"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['timeseriestransformer'].get_featurization_summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -36,7 +36,8 @@
"1. Create an Experiment in an existing Workspace\n",
"2. Instantiate an AutoMLConfig \n",
"3. Find and train a forecasting model using local compute\n",
"4. Evaluate the performance of the model\n",
"4. Viewing the engineered names for featurized data and featurization summary for all raw features\n",
"5. Evaluate the performance of the model\n",
"\n",
"The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area."
]
@@ -320,6 +321,45 @@
"fitted_pipeline.steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View the engineered names for featurized data\n",
"Below we display the engineered feature names generated for the featurized data using the time-series featurization."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_pipeline.named_steps['timeseriestransformer'].get_engineered_feature_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View the featurization summary\n",
"Below we display the featurization that was performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:-\n",
"- Raw feature name\n",
"- Number of engineered features formed out of this raw feature\n",
"- Type detected\n",
"- If feature was dropped\n",
"- List of feature transformations for the raw feature"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_pipeline.named_steps['timeseriestransformer'].get_featurization_summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -37,8 +37,9 @@
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model.\n",
"5. Explore the results.\n",
"3. Train the model.\n",
"4. Explore the results.\n",
"5. Viewing the engineered names for featurized data and featurization summary for all raw features.\n",
"6. Test the best fitted model.\n",
"\n",
"In addition this notebook showcases the following features\n",
@@ -154,7 +155,6 @@
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|\n",
"|**experiment_exit_score**|*double* value indicating the target for *primary_metric*. <br>Once the target is surpassed the run terminates.|\n",
"|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGD</i><br><i>MultinomialNaiveBayes</i><br><i>BernoulliNaiveBayes</i><br><i>SVM</i><br><i>LinearSVM</i><br><i>KNN</i><br><i>DecisionTree</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>GradientBoosting</i><br><i>TensorFlowDNN</i><br><i>TensorFlowLinearClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i>|\n",
@@ -174,7 +174,6 @@
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 60,\n",
" iterations = 20,\n",
" n_cross_validations = 5,\n",
" preprocess = True,\n",
" experiment_exit_score = 0.9984,\n",
" blacklist_models = ['KNN','LinearSVM'],\n",
@@ -318,6 +317,45 @@
"# best_run, fitted_model = local_run.get_output(iteration = iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View the engineered names for featurized data\n",
"Below we display the engineered feature names generated for the featurized data using the preprocessing featurization."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['datatransformer'].get_engineered_feature_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View the featurization summary\n",
"Below we display the featurization that was performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:-\n",
"- Raw feature name\n",
"- Number of engineered features formed out of this raw feature\n",
"- Type detected\n",
"- If feature was dropped\n",
"- List of feature transformations for the raw feature"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['datatransformer'].get_featurization_summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -305,7 +305,7 @@
"from azureml.train.automl.automlexplainer import explain_model\n",
"\n",
"shap_values, expected_values, overall_summary, overall_imp, per_class_summary, per_class_imp = \\\n",
" explain_model(fitted_model, X_train, X_test)"
" explain_model(fitted_model, X_train, X_test, features=features)"
]
},
{

View File

@@ -40,7 +40,8 @@
"3. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model using the DSVM.\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"6. Viewing the engineered names for featurized data and featurization summary for all raw features.\n",
"7. Test the best fitted model.\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Parallel** executions for iterations\n",
@@ -160,6 +161,7 @@
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"import pkg_resources\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
@@ -167,7 +169,9 @@
"# Set compute target to the Linux DSVM\n",
"conda_run_config.target = dsvm_compute\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80'])\n",
"pandas_dependency = 'pandas==' + pkg_resources.get_distribution(\"pandas\").version\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80',pandas_dependency])\n",
"conda_run_config.environment.python.conda_dependencies = cd"
]
},
@@ -407,6 +411,45 @@
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View the engineered names for featurized data\n",
"Below we display the engineered feature names generated for the featurized data using the preprocessing featurization."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['datatransformer'].get_engineered_feature_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View the featurization summary\n",
"Below we display the featurization that was performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:-\n",
"- Raw feature name\n",
"- Number of engineered features formed out of this raw feature\n",
"- Type detected\n",
"- If feature was dropped\n",
"- List of feature transformations for the raw feature"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['datatransformer'].get_featurization_summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -245,6 +245,7 @@
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"import pkg_resources\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
@@ -254,7 +255,9 @@
"# set the data reference of the run coonfiguration\n",
"conda_run_config.data_references = {ds.name: dr}\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80'])\n",
"pandas_dependency = 'pandas==' + pkg_resources.get_distribution(\"pandas\").version\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80',pandas_dependency])\n",
"conda_run_config.environment.python.conda_dependencies = cd"
]
},

View File

@@ -23,7 +23,8 @@
"3. Configure Automated ML using `AutoMLConfig`.\n",
"4. Train the model using Azure Databricks.\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"6. Viewing the engineered names for featurized data and featurization summary for all raw features.\n",
"7. Test the best fitted model.\n",
"\n",
"Before running this notebook, please follow the <a href=\"https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks\" target=\"_blank\">readme for using Automated ML on Azure Databricks</a> for installing necessary libraries to your cluster."
]
@@ -556,6 +557,45 @@
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View the engineered names for featurized data\n",
"Below we display the engineered feature names generated for the featurized data using the preprocessing featurization."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['datatransformer'].get_engineered_feature_names()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View the featurization summary\n",
"Below we display the featurization that was performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:-\n",
"- Raw feature name\n",
"- Number of engineered features formed out of this raw feature\n",
"- Type detected\n",
"- If feature was dropped\n",
"- List of feature transformations for the raw feature"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps['datatransformer'].get_featurization_summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -207,6 +207,7 @@
"import os\n",
"import random\n",
"import time\n",
"import json\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
@@ -295,7 +296,7 @@
" datastore_name = datastore_name, \n",
" container_name = container_name, \n",
" account_name = account_name,\n",
" overwrite = True\n",
" overwrite = True\n",
")"
]
},
@@ -427,7 +428,7 @@
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 10,\n",
" iterations = 30,\n",
" iterations = 5,\n",
" preprocess = True,\n",
" n_cross_validations = 10,\n",
" max_concurrent_iterations = 2, #change it based on number of worker nodes\n",
@@ -591,22 +592,21 @@
"%%writefile score.py\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"import numpy as np\n",
"import azureml.train.automl\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"import pandas as pd\n",
"\n",
"def init():\n",
" global model\n",
" model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy\n",
" model_path = Model.get_model_path(model_name = '<<model_id>>') # this name is model.id of model that we want to deploy\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
"def run(rawdata):\n",
"def run(raw_data):\n",
" try:\n",
" data = json.loads(rawdata)['data']\n",
" data = numpy.array(data)\n",
" data = (pd.DataFrame(np.array(json.loads(raw_data)['data']), columns=[str(i) for i in range(0,64)]))\n",
" result = model.predict(data)\n",
" except Exception as e:\n",
" result = str(e)\n",
@@ -614,6 +614,22 @@
" return json.dumps({\"result\":result.tolist()})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Replace <<model_id>>\n",
"content = \"\"\n",
"with open(\"score.py\", \"r\") as fo:\n",
" content = fo.read()\n",
"\n",
"new_content = content.replace(\"<<model_id>>\", local_run.model_id)\n",
"with open(\"score.py\", \"w\") as fw:\n",
" fw.write(new_content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -672,16 +688,19 @@
"metadata": {},
"outputs": [],
"source": [
"\n",
"# this will take 10-15 minutes to finish\n",
"\n",
"service_name = \"<<servicename>>\"\n",
"import uuid\n",
"from azureml.core.image import ContainerImage\n",
"\n",
"guid = str(uuid.uuid4()).split(\"-\")[0]\n",
"service_name = \"myservice-{}\".format(guid)\n",
"print(\"Creating service with name: {}\".format(service_name))\n",
"runtime = \"spark-py\" \n",
"driver_file = \"score.py\"\n",
"my_conda_file = \"mydeployenv.yml\"\n",
"\n",
"# image creation\n",
"from azureml.core.image import ContainerImage\n",
"myimage_config = ContainerImage.image_configuration(execution_script = driver_file, \n",
" runtime = runtime, \n",
" conda_file = 'mydeployenv.yml')\n",
@@ -744,18 +763,39 @@
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" test_sample = json.dumps({'data':X_test[index:index + 1].values.tolist()})\n",
" predicted = myservice.run(input_data = test_sample)\n",
" label = y_test.values[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" predictedDict = json.loads(predicted)\n",
" title = \"Label value = %d Predicted value = %s \" % ( label,predictedDict['result'][0]) \n",
" fig = plt.figure(3, figsize = (5,5))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" display(fig)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"### Delete the service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"myservice.delete()"
]
}
],
"metadata": {

View File

@@ -0,0 +1,53 @@
**Azure HDInsight**
Azure HDInsight is a fully managed cloud Hadoop & Spark offering that gives you
optimized open-source analytic clusters for Spark, Hive, MapReduce, HBase,
Storm, and Kafka. HDInsight Spark clusters provide kernels that you can use with
the Jupyter notebook on [Apache Spark](https://spark.apache.org/) for testing
your applications.
How Azure HDInsight works with Azure Machine Learning service
- You can train a model using Spark clusters and deploy the model to ACI/AKS
from within Azure HDInsight.
- You can also use [automated machine
learning](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-automated-ml) capabilities
integrated within Azure HDInsight.
You can use Azure HDInsight as a compute target from an [Azure Machine Learning
pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines).
**Set up your HDInsight cluster**
Create [HDInsight
cluster](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters)
**Quick create: Basic cluster setup**
This article walks you through setup in the [Azure
portal](https://portal.azure.com/), where you can create an HDInsight cluster
using *Quick create* or *Custom*.
![hdinsight create options custom quick create](media/0a235b34c0b881117e51dc31a232dbe1.png)
Follow instructions on the screen to do a basic cluster setup. Details are
provided below for:
- [Resource group
name](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters#resource-group-name)
- [Cluster types and
configuration](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters#cluster-types)
(Cluster must be Spark 2.3 (HDI 3.6) or greater)
- Cluster login and SSH username
- [Location](https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters#location)
**Import the sample HDI notebook in Jupyter**
**Important links:**
Create HDI cluster:
<https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters>

View File

@@ -0,0 +1,624 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated ML on Azure HDInsight\n",
"\n",
"In this example we use the scikit-learn's <a href=\"http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset\" target=\"_blank\">digit dataset</a> to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create Azure Machine Learning Workspace object and initialize your notebook directory to easily reload this object from a configuration file.\n",
"2. Create an `Experiment` in an existing `Workspace`.\n",
"3. Configure Automated ML using `AutoMLConfig`.\n",
"4. Train the model using Azure HDInsight.\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"\n",
"Before running this notebook, please follow the readme for using Automated ML on Azure HDI for installing necessary libraries to your cluster."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"from azureml.core.authentication import ServicePrincipalAuthentication\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun\n",
"import logging\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"from azureml.core.authentication import ServicePrincipalAuthentication\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun\n",
"import logging\n",
"\n",
"subscription_id = \"<Your SubscriptionId>\" #you should be owner or contributor\n",
"resource_group = \"<Resource group - new or existing>\" #you should be owner or contributor\n",
"workspace_name = \"<workspace to be created>\" #your workspace name\n",
"workspace_region = \"<azureregion>\" #your region\n",
"\n",
"\n",
"tenant_id = \"<tenant_id>\"\n",
"app_id = \"<app_id>\"\n",
"app_key = \"<app_key>\"\n",
"\n",
"auth_sp = ServicePrincipalAuthentication(tenant_id = tenant_id,\n",
" service_principal_id = app_id,\n",
" service_principal_password = app_key)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##TESTONLY\n",
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" auth = auth_sp,\n",
" exist_ok=True)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group,\n",
" auth = auth_sp)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Folder to Host Sample Projects\n",
"Finally, create a folder where all the sample projects will be hosted."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"sample_projects_folder = './sample_projects'\n",
"\n",
"if not os.path.isdir(sample_projects_folder):\n",
" os.mkdir(sample_projects_folder)\n",
" \n",
"print('Sample projects will be created in {}.'.format(sample_projects_folder))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"import time\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-classification-hdi'\n",
"project_folder = './sample_projects/automl-local-classification-hdi'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Registering Datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Datastore is the way to save connection information to a storage service (e.g. Azure Blob, Azure Data Lake, Azure SQL) information to your workspace so you can access them without exposing credentials in your code. The first thing you will need to do is register a datastore, you can refer to our [python SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) on how to register datastores. __Note: for best security practices, please do not check in code that contains registering datastores with secrets into your source control__\n",
"\n",
"The code below registers a datastore pointing to a publicly readable blob container."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Datastore\n",
"\n",
"datastore_name = 'demo_training'\n",
"container_name = 'digits' \n",
"account_name = 'automlpublicdatasets'\n",
"Datastore.register_azure_blob_container(\n",
" workspace = ws, \n",
" datastore_name = datastore_name, \n",
" container_name = container_name, \n",
" account_name = account_name,\n",
" overwrite = True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is an example on how to register a private blob container\n",
"```python\n",
"datastore = Datastore.register_azure_blob_container(\n",
" workspace = ws, \n",
" datastore_name = 'example_datastore', \n",
" container_name = 'example-container', \n",
" account_name = 'storageaccount',\n",
" account_key = 'accountkey'\n",
")\n",
"```\n",
"The example below shows how to register an Azure Data Lake store. Please make sure you have granted the necessary permissions for the service principal to access the data lake.\n",
"```python\n",
"datastore = Datastore.register_azure_data_lake(\n",
" workspace = ws,\n",
" datastore_name = 'example_datastore',\n",
" store_name = 'adlsstore',\n",
" tenant_id = 'tenant-id-of-service-principal',\n",
" client_id = 'client-id-of-service-principal',\n",
" client_secret = 'client-secret-of-service-principal'\n",
")\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Training Data Using DataPrep"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Automated ML takes a Dataflow as input.\n",
"\n",
"If you are familiar with Pandas and have done your data preparation work in Pandas already, you can use the `read_pandas_dataframe` method in dprep to convert the DataFrame to a Dataflow.\n",
"```python\n",
"df = pd.read_csv(...)\n",
"# apply some transforms\n",
"dprep.read_pandas_dataframe(df, temp_folder='/path/accessible/by/both/driver/and/worker')\n",
"```\n",
"\n",
"If you just need to ingest data without doing any preparation, you can directly use AzureML Data Prep (Data Prep) to do so. The code below demonstrates this scenario. Data Prep also has data preparation capabilities, we have many [sample notebooks](https://github.com/Microsoft/AMLDataPrepDocs) demonstrating the capabilities.\n",
"\n",
"You will get the datastore you registered previously and pass it to Data Prep for reading. The data comes from the digits dataset: `sklearn.datasets.load_digits()`. `DataPath` points to a specific location within a datastore. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.dataprep as dprep\n",
"from azureml.data.datapath import DataPath\n",
"\n",
"datastore = Datastore.get(workspace = ws, datastore_name = datastore_name)\n",
"\n",
"X_train = dprep.read_csv(datastore.path('X.csv'))\n",
"y_train = dprep.read_csv(datastore.path('y.csv')).to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Review the Data Preparation Result\n",
"You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only j records for all the steps in the Dataflow, which makes it fast even against large datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train.get_profile()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_train.get_profile()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**spark_context**|Spark Context object. for HDInsight, use spark_context=sc|\n",
"|**max_concurrent_iterations**|Maximum number of iterations to execute in parallel. This should be <= number of worker nodes in your Azure HDInsight cluster.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
"|**preprocess**|set this to True to enable pre-processing of data eg. string to numeric using one-hot encoding|\n",
"|**exit_score**|Target score for experiment. It is associated with the metric. eg. exit_score=0.995 will exit experiment after that|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 10,\n",
" iterations = 3,\n",
" preprocess = True,\n",
" n_cross_validations = 10,\n",
" max_concurrent_iterations = 2, #change it based on number of worker nodes\n",
" verbosity = logging.INFO,\n",
" spark_context=sc, #HDI /spark related\n",
" X = X_train, \n",
" y = y_train,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following will show the child runs and waits for the parent run to complete."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve All Child Runs after the experiment is completed (in portal)\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model after the above run is complete \n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric after the above run is complete based on the child run\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"#### Load Test Data - you can split the dataset beforehand & pass Train dataset to AutoML and use Test dataset to evaluate the best model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"blob_location = \"https://{}.blob.core.windows.net/{}\".format(account_name, container_name)\n",
"X_test = pd.read_csv(\"{}./X_valid.csv\".format(blob_location), header=0)\n",
"y_test = pd.read_csv(\"{}/y_valid.csv\".format(blob_location), header=0)\n",
"images = pd.read_csv(\"{}/images.csv\".format(blob_location), header=None)\n",
"images = np.reshape(images.values, (100,8,8))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will try to predict digits and see how our model works. This is just an example to show you."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test.values[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(3, figsize = (5,5))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" display(fig)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When deploying an automated ML trained model, please specify _pippackages=['azureml-sdk[automl]']_ in your CondaDependencies.\n",
"\n",
"Please refer to only the **Deploy** section in this notebook - <a href=\"https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-with-deployment\" target=\"_blank\">Deployment of Automated ML trained model</a>"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
},
{
"name": "sasum"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "Python",
"name": "Python36"
},
"language_info": {
"codemirror_mode": {
"name": "python",
"version": 3
},
"mimetype": "text/x-python",
"name": "pyspark3",
"pygments_lexer": "python3"
},
"name": "auto-ml-classification-local-adb",
"notebookId": 587284549713154
},
"nbformat": 4,
"nbformat_minor": 1
}

View File

@@ -6,15 +6,18 @@ These tutorials show how to create and deploy Open Neural Network eXchange ([ONN
0. [Configure your Azure Machine Learning Workspace](../../../configuration.ipynb)
#### Obtain models from the [ONNX Model Zoo](https://github.com/onnx/models) and deploy with ONNX Runtime Inference
1. [Handwritten Digit Classification (MNIST)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb)
2. [Facial Expression Recognition (Emotion FER+)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb)
#### Obtain pretrained models from the [ONNX Model Zoo](https://github.com/onnx/models) and deploy with ONNX Runtime
1. [MNIST - Handwritten Digit Classification with ONNX Runtime](onnx-inference-mnist-deploy.ipynb)
2. [Emotion FER+ - Facial Expression Recognition with ONNX Runtime](onnx-inference-facial-expression-recognition-deploy.ipynb)
#### Train model on Azure ML, convert to ONNX, and deploy with ONNX Runtime
3. [MNIST - Train using PyTorch and deploy with ONNX Runtime](onnx-train-pytorch-aml-deploy-mnist.ipynb)
#### Demo Notebooks from Microsoft Ignite 2018
Note that the following notebooks do not have evaluation sections for the models since they were deployed as part of a live demo. You can find the respective pre-processing and post-processing code linked from the ONNX Model Zoo Github pages ([ResNet](https://github.com/onnx/models/tree/master/models/image_classification/resnet), [TinyYoloV2](https://github.com/onnx/models/tree/master/tiny_yolov2)), or experiment with the ONNX models by [running them in the browser](https://microsoft.github.io/onnxjs-demo/#/).
3. [Image Recognition (ResNet50)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb)
4. [Convert Core ML Model to ONNX and deploy - Real Time Object Detection (TinyYOLO)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb)
4. [ResNet50 - Image Recognition with ONNX Runtime](onnx-modelzoo-aml-deploy-resnet50.ipynb)
5. [TinyYoloV2 - Convert from CoreML and deploy with ONNX Runtime](onnx-convert-aml-deploy-tinyyolo.ipynb)
## Documentation
- [ONNX Runtime Python API Documentation](http://aka.ms/onnxruntime-python)
@@ -22,7 +25,7 @@ Note that the following notebooks do not have evaluation sections for the models
## Related Articles
- [Building and Deploying ONNX Runtime Models](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx)
- [Azure AI Making AI Real for Business](https://aka.ms/aml-blog-overview)
- [Azure AI Making AI Real for Business](https://aka.ms/aml-blog-overview)
- [Whats new in Azure Machine Learning](https://aka.ms/aml-blog-whats-new)
## License

View File

@@ -0,0 +1,124 @@
# This is a modified version of https://github.com/pytorch/examples/blob/master/mnist/main.py which is
# licensed under BSD 3-Clause (https://github.com/pytorch/examples/blob/master/LICENSE)
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import os
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def train(args, model, device, train_loader, optimizer, epoch, output_dir):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(args, model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, size_average=False, reduce=True).item() # sum up batch loss
pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
def main():
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=5, metavar='N',
help='number of epochs to train (default: 5)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
parser.add_argument('--output-dir', type=str, default='outputs')
args = parser.parse_args()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
output_dir = args.output_dir
os.makedirs(output_dir, exist_ok=True)
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, download=True,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
),
batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=False,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
),
batch_size=args.test_batch_size, shuffle=True, **kwargs)
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
for epoch in range(1, args.epochs + 1):
train(args, model, device, train_loader, optimizer, epoch, output_dir)
test(args, model, device, test_loader)
# save model
dummy_input = torch.randn(1, 1, 28, 28, device=device)
model_path = os.path.join(output_dir, 'mnist.onnx')
torch.onnx.export(model, dummy_input, model_path)
if __name__ == '__main__':
main()

File diff suppressed because one or more lines are too long

View File

@@ -167,6 +167,31 @@
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Use a custom Docker image\n",
"\n",
"You can also specify a custom Docker image to be used as base image if you don't want to use the default base image provided by Azure ML. Please make sure the custom Docker image has Ubuntu >= 16.04, Conda >= 4.5.\\* and Python(3.5.\\* or 3.6.\\*).\n",
"\n",
"Only Supported for `ContainerImage`(from azureml.core.image) with `python` runtime.\n",
"```python\n",
"# use an image available in public Container Registry without authentication\n",
"image_config.base_image = \"mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda\"\n",
"\n",
"# or, use an image available in a private Container Registry\n",
"image_config.base_image = \"myregistry.azurecr.io/mycustomimage:1.0\"\n",
"image_config.base_image_registry.address = \"myregistry.azurecr.io\"\n",
"image_config.base_image_registry.username = \"username\"\n",
"image_config.base_image_registry.password = \"password\"\n",
"\n",
"# or, use an image built during training.\n",
"image_config.base_image = run.properties[\"AzureML.DerivedImageName\"]\n",
"```\n",
"You can get the address of training image from the properties of a Run object. Only new runs submitted with azureml-sdk>=1.0.22 to AMLCompute targets will have the 'AzureML.DerivedImageName' property. Instructions on how to get a Run can be found in [manage-runs](../../training/manage-runs/manage-runs.ipynb). \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -191,6 +216,56 @@
" provisioning_configuration = prov_config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create AKS Cluster in an existing virtual network (optional)\n",
"See code snippet below. Check the documentation [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-enable-virtual-network#use-azure-kubernetes-service) for more details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"'''\n",
"from azureml.core.compute import ComputeTarget, AksCompute\n",
"\n",
"# Create the compute configuration and set virtual network information\n",
"config = AksCompute.provisioning_configuration(location=\"eastus2\")\n",
"config.vnet_resourcegroup_name = \"mygroup\"\n",
"config.vnet_name = \"mynetwork\"\n",
"config.subnet_name = \"default\"\n",
"config.service_cidr = \"10.0.0.0/16\"\n",
"config.dns_service_ip = \"10.0.0.10\"\n",
"config.docker_bridge_cidr = \"172.17.0.1/16\"\n",
"\n",
"# Create the compute target\n",
"aks_target = ComputeTarget.create(workspace = ws,\n",
" name = \"myaks\",\n",
" provisioning_configuration = config)\n",
"'''"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Enable SSL on the AKS Cluster (optional)\n",
"See code snippet below. Check the documentation [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-secure-web-service) for more details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# provisioning_config = AksCompute.provisioning_configuration(ssl_cert_pem_file=\"cert.pem\", ssl_key_pem_file=\"key.pem\", ssl_cname=\"www.contoso.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -270,8 +345,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Test the web service\n",
"We test the web sevice by passing data."
"# Test the web service using run method\n",
"We test the web sevice by passing data.\n",
"Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
]
},
{
@@ -293,6 +369,57 @@
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Test the web service using raw HTTP request (optional)\n",
"Alternatively you can construct a raw HTTP request and send it to the service. In this case you need to explicitly pass the HTTP header. This process is shown in the next 2 cells."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# retreive the API keys. AML generates two keys.\n",
"'''\n",
"key1, Key2 = aks_service.get_keys()\n",
"print(key1)\n",
"'''"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# construct raw HTTP request and send to the service\n",
"'''\n",
"%%time\n",
"\n",
"import requests\n",
"\n",
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,5,6,7,8,9,10], \n",
" [10,9,8,7,6,5,4,3,2,1]\n",
"]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n",
"# Don't forget to add key to the HTTP header.\n",
"headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + key1}\n",
"\n",
"resp = requests.post(aks_service.scoring_uri, test_sample, headers=headers)\n",
"\n",
"\n",
"print(\"prediction:\", resp.text)\n",
"'''"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -317,7 +444,7 @@
"metadata": {
"authors": [
{
"name": "raymondl"
"name": "aashishb"
}
],
"kernelspec": {

View File

@@ -261,6 +261,31 @@
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Use a custom Docker image\n",
"\n",
"You can also specify a custom Docker image to be used as base image if you don't want to use the default base image provided by Azure ML. Please make sure the custom Docker image has Ubuntu >= 16.04, Conda >= 4.5.\\* and Python(3.5.\\* or 3.6.\\*).\n",
"\n",
"Only Supported for `ContainerImage`(from azureml.core.image) with `python` runtime.\n",
"```python\n",
"# use an image available in public Container Registry without authentication\n",
"image_config.base_image = \"mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda\"\n",
"\n",
"# or, use an image available in a private Container Registry\n",
"image_config.base_image = \"myregistry.azurecr.io/mycustomimage:1.0\"\n",
"image_config.base_image_registry.address = \"myregistry.azurecr.io\"\n",
"image_config.base_image_registry.username = \"username\"\n",
"image_config.base_image_registry.password = \"password\"\n",
"\n",
"# or, use an image built during training.\n",
"image_config.base_image = run.properties[\"AzureML.DerivedImageName\"]\n",
"```\n",
"You can get the address of training image from the properties of a Run object. Only new runs submitted with azureml-sdk>=1.0.22 to AMLCompute targets will have the 'AzureML.DerivedImageName' property. Instructions on how to get a Run can be found in [manage-runs](../../training/manage-runs/manage-runs.ipynb). \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -395,7 +420,7 @@
"metadata": {
"authors": [
{
"name": "raymondl"
"name": "aashishb"
}
],
"kernelspec": {

View File

@@ -38,18 +38,19 @@ In this directory, there are two types of notebooks:
* The first type of notebooks will introduce you to core Azure Machine Learning Pipelines features. These notebooks below belong in this category, and are designed to go in sequence; they're all located in the "intro-to-pipelines" folder:
1. [aml-pipelines-getting-started.ipynb](https://aka.ms/pl-get-started)
2. [aml-pipelines-with-data-dependency-steps.ipynb](https://aka.ms/pl-data-dep)
3. [aml-pipelines-publish-and-run-using-rest-endpoint.ipynb](https://aka.ms/pl-pub-rep)
4. [aml-pipelines-data-transfer.ipynb](https://aka.ms/pl-data-trans)
5. [aml-pipelines-use-databricks-as-compute-target.ipynb](https://aka.ms/pl-databricks)
6. [aml-pipelines-use-adla-as-compute-target.ipynb](https://aka.ms/pl-adla)
7. [aml-pipelines-parameter-tuning-with-hyperdrive.ipynb](https://aka.ms/pl-hyperdrive)
8. [aml-pipelines-how-to-use-azurebatch-to-run-a-windows-executable.ipynb](https://aka.ms/pl-azbatch)
9. [aml-pipelines-setup-schedule-for-a-published-pipeline.ipynb](https://aka.ms/pl-schedule)
10. [aml-pipelines-with-automated-machine-learning-step.ipynb](https://aka.ms/pl-automl)
1. [aml-pipelines-getting-started.ipynb](https://aka.ms/pl-get-started): Start with this notebook to understand the concepts of using Azure Machine Learning Pipelines. This notebook will show you how to run steps in parallel and in sequence.
2. [aml-pipelines-with-data-dependency-steps.ipynb](https://aka.ms/pl-data-dep): This notebook shows how to connect steps in your pipeline using data. Data produced by one step is used by subsequent steps to force an explicit dependency between steps.
3. [aml-pipelines-publish-and-run-using-rest-endpoint.ipynb](https://aka.ms/pl-pub-rep): Once you are satisfied with your iterative runs, you can publish your pipeline to get a REST endpoint, which can be invoked from non-Python clients as well.
4. [aml-pipelines-data-transfer.ipynb](https://aka.ms/pl-data-trans): This notebook shows how you can transfer data between supported datastores.
5. [aml-pipelines-use-databricks-as-compute-target.ipynb](https://aka.ms/pl-databricks): This notebook shows how you can use Pipelines to send your compute payload to Azure Databricks.
6. [aml-pipelines-use-adla-as-compute-target.ipynb](https://aka.ms/pl-adla): This notebook shows how you can use Azure Data Lake Analytics (ADLA) as a compute target.
7. [aml-pipelines-how-to-use-estimatorstep.ipynb](https://aka.ms/pl-estimator): This notebook shows how to use the EstimatorStep.
8. [aml-pipelines-parameter-tuning-with-hyperdrive.ipynb](https://aka.ms/pl-hyperdrive): HyperDriveStep in Pipelines shows how you can do hyperparameter tuning using Pipelines.
9. [aml-pipelines-how-to-use-azurebatch-to-run-a-windows-executable.ipynb](https://aka.ms/pl-azbatch): AzureBatchStep can be used to run your custom code in an Azure Batch cluster.
10. [aml-pipelines-setup-schedule-for-a-published-pipeline.ipynb](https://aka.ms/pl-schedule): Once you publish a Pipeline, you can schedule it to trigger based on an interval or on data changes in a defined datastore.
11. [aml-pipelines-with-automated-machine-learning-step.ipynb](https://aka.ms/pl-automl): AutoMLStep in Pipelines shows how you can do automated machine learning using Pipelines.
* The second type of notebooks illustrate more sophisticated scenarios, and are independent of each other. These notebooks include:
1. [pipeline-batch-scoring.ipynb](https://aka.ms/pl-batch-score)
1. [pipeline-batch-scoring.ipynb](https://aka.ms/pl-batch-score): This notebook demonstrates how to run a batch scoring job using Azure Machine Learning pipelines.
2. [pipeline-style-transfer.ipynb](https://aka.ms/pl-style-trans)

View File

@@ -141,7 +141,7 @@
" print(\"registered blob datastore with name: %s\" % blob_datastore_name)\n",
"\n",
"# CLI:\n",
"# az ml datastore register-blob -n <datastore-name> -a <account-name> -c <container-name> -k <account-key> [-t <sas-token>]"
"# az ml datastore attach-blob -n <datastore-name> -a <account-name> -c <container-name> -k <account-key> [-t <sas-token>]"
]
},
{

View File

@@ -303,7 +303,7 @@
"\n",
"The following code will create a PythonScriptStep to be executed in the Azure Machine Learning Compute we created above using train.py, one of the files already made available in the project folder.\n",
"\n",
"A **PythonScriptStep** is a basic, built-in step to run a Python Script on a compute target. It takes a script name and optionally other parameters like arguments for the script, compute target, inputs and outputs. If no compute target is specified, default compute target for the workspace is used."
"A **PythonScriptStep** is a basic, built-in step to run a Python Script on a compute target. It takes a script name and optionally other parameters like arguments for the script, compute target, inputs and outputs. If no compute target is specified, default compute target for the workspace is used. You can also use a [**RunConfiguration**](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.runconfiguration?view=azure-ml-py) to specify requirements for the PythonScriptStep, such as conda dependencies and docker image."
]
},
{
@@ -369,10 +369,34 @@
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
"\n",
"# Use a RunConfiguration to specify some additional requirements for this step.\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.runconfig import DEFAULT_CPU_IMAGE\n",
"\n",
"# create a new runconfig object\n",
"run_config = RunConfiguration()\n",
"\n",
"# enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# set Docker base image to the default CPU-based image\n",
"run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE\n",
"\n",
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# auto-prepare the Docker image when used for execution (if it is not already prepared)\n",
"run_config.auto_prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"\n",
"step3 = PythonScriptStep(name=\"extract_step\",\n",
" script_name=\"extract.py\", \n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
" source_directory=project_folder,\n",
" runconfig=run_config)\n",
"\n",
"# list of steps to run\n",
"steps = [step1, step2, step3]\n",

View File

@@ -0,0 +1,281 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to use EstimatorStep in AML Pipeline\n",
"\n",
"This notebook shows how to use the EstimatorStep with Azure Machine Learning Pipelines. Estimator is a convenient object in Azure Machine Learning that wraps run configuration information to help simplify the tasks of specifying how a script is executed.\n",
"\n",
"\n",
"## Prerequisite:\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's get started. First let's import some Python libraries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"# check core SDK version number\n",
"print(\"Azure ML SDK Version: \", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize workspace\n",
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we could not find the cluster with the given name, then we will create a new cluster here. We will create an `AmlCompute` cluster of `STANDARD_NC6` GPU VMs. This process is broken down into 3 steps:\n",
"1. create the configuration (this step is local and only takes a second)\n",
"2. create the cluster (this step will take about **20 seconds**)\n",
"3. provision the VMs to bring the cluster to the initial size (of 1 in this case). This step will take about **3-5 minutes** and is providing only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# choose a name for your cluster\n",
"cluster_name = \"cpucluster\"\n",
"\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target')\n",
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', max_nodes=4)\n",
"\n",
" # create the cluster\n",
" cpu_cluster = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it uses the scale settings for the cluster\n",
" cpu_cluster.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"# use get_status() to get a detailed status for the current cluster. \n",
"print(cpu_cluster.get_status().serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that you have created the compute target, let's see what the workspace's `compute_targets` property returns. You should now see one entry named 'cpucluster' of type `AmlCompute`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use a simple script\n",
"We have already created a simple \"hello world\" script. This is the script that we will submit through the estimator pattern. It prints a hello-world message, and if Azure ML SDK is installed, it will also logs an array of values ([Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number))."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build an Estimator object\n",
"Estimator by default will attempt to use Docker-based execution. You can also enable Docker and let estimator pick the default CPU image supplied by Azure ML for execution. You can target an AmlCompute cluster (or any other supported compute target types). You can also customize the conda environment by adding conda and/or pip packages.\n",
"\n",
"> Note: The arguments to the entry script used in the Estimator object should be specified as *list* using\n",
" 'estimator_entry_script_arguments' parameter when instantiating EstimatorStep. Estimator object's parameter\n",
" 'script_params' accepts a dictionary. However 'estimator_entry_script_arguments' parameter expects arguments as\n",
" a list.\n",
"\n",
"> Estimator object initialization involves specifying a list of DataReference objects in its 'inputs' parameter.\n",
" In Pipelines, a step can take another step's output or DataReferences as input. So when creating an EstimatorStep,\n",
" the parameters 'inputs' and 'outputs' need to be set explicitly and that will override 'inputs' parameter\n",
" specified in the Estimator object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Datastore\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.core import PipelineData\n",
"\n",
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
"\n",
"input_data = DataReference(\n",
" datastore=def_blob_store,\n",
" data_reference_name=\"input_data\",\n",
" path_on_datastore=\"20newsgroups/20news.pkl\")\n",
"\n",
"output = PipelineData(\"output\", datastore=def_blob_store)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.estimator import Estimator\n",
"\n",
"est = Estimator(source_directory='.', \n",
" compute_target=cpu_cluster, \n",
" entry_script='dummy_train.py', \n",
" conda_packages=['scikit-learn'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an EstimatorStep\n",
"[EstimatorStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.estimator_step.estimatorstep?view=azure-ml-py) adds a step to run Estimator in a Pipeline.\n",
"\n",
"- **name:** Name of the step\n",
"- **estimator:** Estimator object\n",
"- **estimator_entry_script_arguments:** \n",
"- **runconfig_pipeline_params:** Override runconfig properties at runtime using key-value pairs each with name of the runconfig property and PipelineParameter for that property\n",
"- **inputs:** Inputs\n",
"- **outputs:** Output is list of PipelineData\n",
"- **compute_target:** Compute target to use \n",
"- **allow_reuse:** Whether the step should reuse previous results when run with the same settings/inputs. If this is false, a new run will always be generated for this step during pipeline execution.\n",
"- **version:** Optional version tag to denote a change in functionality for the step"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.steps import EstimatorStep\n",
"\n",
"est_step = EstimatorStep(name=\"Estimator_Train\", \n",
" estimator=est, \n",
" estimator_entry_script_arguments=[\"--datadir\", input_data, \"--output\", output],\n",
" runconfig_pipeline_params=None, \n",
" inputs=[input_data], \n",
" outputs=[output], \n",
" compute_target=cpu_cluster)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build and Submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline\n",
"from azureml.core import Experiment\n",
"pipeline = Pipeline(workspace=ws, steps=[est_step])\n",
"pipeline_run = Experiment(ws, 'Estimator_sample').submit(pipeline)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
}
],
"metadata": {
"authors": [
{
"name": "sanpil"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -36,7 +36,7 @@
"from azureml.exceptions import ComputeTargetException\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.steps import HyperDriveStep\n",
"from azureml.pipeline.core import Pipeline\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.train.dnn import TensorFlow\n",
"from azureml.train.hyperdrive import *\n",
"\n",
@@ -310,11 +310,17 @@
"metadata": {},
"outputs": [],
"source": [
"metrics_output_name = 'metrics_output'\n",
"metirics_data = PipelineData(name='metrics_data',\n",
" datastore=ds,\n",
" pipeline_output_name=metrics_output_name)\n",
"\n",
"hd_step = HyperDriveStep(\n",
" name=\"hyperdrive_module\",\n",
" hyperdrive_run_config=hd_config,\n",
" estimator_entry_script_arguments=['--data-folder', data_folder],\n",
" inputs=[data_folder])"
" inputs=[data_folder],\n",
" metrics_output=metirics_data)"
]
},
{
@@ -366,6 +372,40 @@
"source": [
"pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the metrics\n",
"Outputs of above run can be used as inputs of other steps in pipeline. In this tutorial, we will show the result metrics."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metrics_output = pipeline_run.get_pipeline_output(metrics_output_name)\n",
"num_file_downloaded = metrics_output.download('.', show_progress=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import json\n",
"with open(metrics_output._path_on_datastore) as f: \n",
" metrics_output_result = f.read()\n",
" \n",
"deserialized_metrics_output = json.loads(metrics_output_result)\n",
"df = pd.DataFrame(deserialized_metrics_output)\n",
"df"
]
}
],
"metadata": {

View File

@@ -33,7 +33,7 @@
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace, Datastore\n",
"from azureml.core import Workspace, Datastore, Experiment\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"\n",
@@ -55,10 +55,7 @@
"print(\"Default datastore's name: {}\".format(def_file_store.name))\n",
"\n",
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
"print(\"Blobstore's name: {}\".format(def_blob_store.name))\n",
"\n",
"# project folder\n",
"project_folder = '.'"
"print(\"Blobstore's name: {}\".format(def_blob_store.name))"
]
},
{
@@ -160,7 +157,7 @@
" inputs=[blob_input_data],\n",
" outputs=[processed_data1],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder\n",
" source_directory='.'\n",
")\n",
"print(\"trainStep created\")"
]
@@ -191,7 +188,7 @@
" inputs=[processed_data1],\n",
" outputs=[processed_data2],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
" source_directory='.')\n",
"print(\"extractStep created\")"
]
},
@@ -252,7 +249,7 @@
" inputs=[processed_data1, processed_data2],\n",
" outputs=[processed_data3], \n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
" source_directory='.')\n",
"print(\"compareStep created\")"
]
},
@@ -270,10 +267,7 @@
"outputs": [],
"source": [
"pipeline1 = Pipeline(workspace=ws, steps=[compareStep])\n",
"print (\"Pipeline is built\")\n",
"\n",
"pipeline1.validate()\n",
"print(\"Simple validation complete\") "
"print (\"Pipeline is built\")"
]
},
{
@@ -290,10 +284,38 @@
"metadata": {},
"outputs": [],
"source": [
"published_pipeline1 = pipeline1.publish(name=\"My_New_Pipeline\", description=\"My Published Pipeline Description\")\n",
"published_pipeline1 = pipeline1.publish(name=\"My_New_Pipeline\", description=\"My Published Pipeline Description\", continue_on_step_failure=True)\n",
"published_pipeline1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note: the continue_on_step_failure parameter specifies whether the execution of steps in the Pipeline will continue if one step fails. The default value is False, meaning when one step fails, the Pipeline execution will stop, canceling any running steps."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Publish the pipeline from a submitted PipelineRun\n",
"It is also possible to publish a pipeline from a submitted PipelineRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# submit a pipeline run\n",
"pipeline_run1 = Experiment(ws, 'Pipeline_experiment').submit(pipeline1)\n",
"# publish a pipeline from the submitted pipeline run\n",
"published_pipeline2 = pipeline_run1.publish_pipeline(name=\"My_New_Pipeline2\", description=\"My Published Pipeline Description\", version=\"0.1\", continue_on_step_failure=True)\n",
"published_pipeline2"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -325,7 +347,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run published pipeline using its REST endpoint"
"### Run published pipeline using its REST endpoint\n",
"[This notebook](https://aka.ms/pl-restep-auth) shows how to authenticate to AML workspace."
]
},
{

View File

@@ -107,15 +107,11 @@
"source": [
"from azureml.pipeline.steps import PythonScriptStep\n",
"\n",
"\n",
"# project folder\n",
"project_folder = 'scripts'\n",
"\n",
"trainStep = PythonScriptStep(\n",
" name=\"Training_Step\",\n",
" script_name=\"train.py\", \n",
" compute_target=aml_compute_target, \n",
" source_directory=project_folder\n",
" source_directory='.'\n",
")\n",
"print(\"TrainStep created\")"
]
@@ -136,9 +132,7 @@
"from azureml.pipeline.core import Pipeline\n",
"\n",
"pipeline1 = Pipeline(workspace=ws, steps=[trainStep])\n",
"print (\"Pipeline is built\")\n",
"\n",
"pipeline1.validate()"
"print (\"Pipeline is built\")"
]
},
{
@@ -255,11 +249,12 @@
"schedules = Schedule.get_all(ws, pipeline_id=pub_pipeline_id)\n",
"\n",
"# We will iterate through the list of schedules and \n",
"# use the last ID in the list for further operations: \n",
"# use the last recurrence schedule in the list for further operations: \n",
"print(\"Found these schedules for the pipeline id {}:\".format(pub_pipeline_id))\n",
"for schedule in schedules: \n",
" print(schedule.id)\n",
" schedule_id = schedule.id\n",
" if schedule.recurrence is not None:\n",
" schedule_id = schedule.id\n",
"\n",
"print(\"Schedule id to be used for schedule operations: {}\".format(schedule_id))"
]
@@ -380,7 +375,8 @@
"metadata": {},
"source": [
"### Create a schedule for the pipeline using a Datastore\n",
"This schedule will run when additions or modifications are made to Blobs in the Datastore container.\n",
"This schedule will run when additions or modifications are made to Blobs in the Datastore.\n",
"By default, the Datastore container is monitored for changes. Use the path_on_datastore parameter to instead specify a path on the Datastore to monitor for changes. Changes made to subfolders in the container/path will not trigger the schedule.\n",
"Note: Only Blob Datastores are supported."
]
},
@@ -400,6 +396,7 @@
" datastore=datastore,\n",
" wait_for_provisioning=True,\n",
" description=\"Schedule Run\")\n",
" #path_on_datastore=\"file/path\") use path_on_datastore to specify a specific folder to monitor for changes.\n",
"\n",
"# You may want to make sure that the schedule is provisioned properly\n",
"# before making any further changes to the schedule\n",

View File

@@ -168,7 +168,7 @@
"metadata": {},
"source": [
"## Data Connections with Inputs and Outputs\n",
"The DatabricksStep supports Azure Bloband ADLS for inputs and outputs. You also will need to define a [Secrets](https://docs.azuredatabricks.net/user-guide/secrets/index.html) scope to enable authentication to external data sources such as Blob and ADLS from Databricks.\n",
"The DatabricksStep supports Azure Blob and ADLS for inputs and outputs. You also will need to define a [Secrets](https://docs.azuredatabricks.net/user-guide/secrets/index.html) scope to enable authentication to external data sources such as Blob and ADLS from Databricks.\n",
"\n",
"- Databricks documentation on [Azure Blob](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html)\n",
"- Databricks documentation on [ADLS](https://docs.databricks.com/spark/latest/data-sources/azure/azure-datalake.html)\n",

View File

@@ -0,0 +1,517 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure Machine Learning Pipeline with AutoMLStep\n",
"This notebook demonstrates the use of AutoMLStep in Azure Machine Learning Pipeline."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Create or Attach existing AmlCompute to a workspace.\n",
"3. Configure AutoML using `AutoMLConfig`.\n",
"4. Use AutoMLStep\n",
"5. Train the model using AmlCompute\n",
"6. Explore the results.\n",
"7. Test the best fitted model.\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Parallel** executions for iterations\n",
"- **Asynchronous** tracking of progress\n",
"- Retrieving models for any iteration or logged metric\n",
"- Specifying AutoML settings as `**kwargs`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Azure Machine Learning and Pipeline SDK-specific imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import csv\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"from azureml.train.automl import AutoMLStep\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Azure ML experiment\n",
"Let's create an experiment named \"automl-classification\" and a folder to hold the training scripts. The script runs will be recorded under the experiment in Azure.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = 'automlstep-classification'\n",
"project_folder = './project'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"experiment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
"\n",
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Choose a name for your cluster.\n",
"amlcompute_cluster_name = \"cpucluster\"\n",
"\n",
"found = False\n",
"# Check if this compute target already exists in the workspace.\n",
"cts = ws.compute_targets\n",
"if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
" found = True\n",
" print('Found existing compute target.')\n",
" compute_target = cts[amlcompute_cluster_name]\n",
" \n",
"if not found:\n",
" print('Creating a new compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
" #vm_priority = 'lowpriority', # optional\n",
" max_nodes = 4)\n",
"\n",
" # Create the cluster.\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
" \n",
" # Can poll for a minimum number of nodes and for a specific timeout.\n",
" # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
" compute_target.wait_for_completion(show_output = True, min_node_count = 1, timeout_in_minutes = 10)\n",
" \n",
" # For a more detailed view of current AmlCompute status, use get_status()."
]
},
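{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, as a quick sanity check, you can inspect the provisioning state and node counts of the cluster with `get_status()` (a minimal sketch; it assumes the cluster was created or attached above)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: get_status() returns the current status of the AmlCompute cluster;\n",
"# serialize() converts it to a plain dictionary for printing.\n",
"print(compute_target.get_status().serialize())"
]
},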
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare and Point to Data\n",
"For remote executions, you need to make the data accessible from the remote compute.\n",
"This can be done by uploading the data to DataStore.\n",
"In this example, we upload scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_train = datasets.load_digits()\n",
"\n",
"if not os.path.isdir('data'):\n",
" os.mkdir('data')\n",
" \n",
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)\n",
" \n",
"pd.DataFrame(data_train.data).to_csv(\"data/X_train.tsv\", index=False, header=False, quoting=csv.QUOTE_ALL, sep=\"\\t\")\n",
"pd.DataFrame(data_train.target).to_csv(\"data/y_train.tsv\", index=False, header=False, sep=\"\\t\")\n",
"\n",
"ds = ws.get_default_datastore()\n",
"ds.upload(src_dir='./data', target_path='bai_data', overwrite=True, show_progress=True)\n",
"\n",
"from azureml.data.data_reference import DataReference \n",
"input_data = DataReference(datastore=ds, \n",
" data_reference_name=\"input_data_reference\",\n",
" path_on_datastore='bai_data',\n",
" mode='download',\n",
" path_on_compute='/tmp/azureml_runs',\n",
" overwrite=False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to AmlCompute\n",
"#conda_run_config.target = compute_target\n",
"\n",
"conda_run_config.environment.docker.enabled = True\n",
"conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], \n",
" conda_packages=['numpy', 'py-xgboost'], \n",
" pin_sdk_version=False)\n",
"conda_run_config.environment.python.conda_dependencies = cd\n",
"\n",
"print('run config is ready')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"import pandas as pd\n",
"\n",
"def get_data():\n",
" X_train = pd.read_csv(\"/tmp/azureml_runs/bai_data/X_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
" y_train = pd.read_csv(\"/tmp/azureml_runs/bai_data/y_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
"\n",
" return { \"X\" : X_train.values, \"y\" : y_train[0].values }\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up AutoMLConfig for Training\n",
"\n",
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
"\n",
"**Note:** When using AmlCompute, you can't pass Numpy arrays directly to the fit method.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"iteration_timeout_minutes\": 5,\n",
" \"iterations\": 20,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": False,\n",
" \"max_concurrent_iterations\": 3,\n",
" \"verbosity\": logging.INFO\n",
"}\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder,\n",
" compute_target=compute_target,\n",
" run_configuration=conda_run_config,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.\n",
"In this example, we specify `show_output = False` to suppress console output while the run is in progress."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define AutoMLStep"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import PipelineData, TrainingOutput\n",
"\n",
"metrics_output_name = 'metrics_output'\n",
"best_model_output_name = 'best_model_output'\n",
"\n",
"metirics_data = PipelineData(name='metrics_data',\n",
" datastore=ds,\n",
" pipeline_output_name=metrics_output_name,\n",
" training_output=TrainingOutput(type='Metrics'))\n",
"model_data = PipelineData(name='model_data',\n",
" datastore=ds,\n",
" pipeline_output_name=best_model_output_name,\n",
" training_output=TrainingOutput(type='Model'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_step = AutoMLStep(\n",
" name='automl_module',\n",
" experiment=experiment,\n",
" automl_config=automl_config,\n",
" inputs=[input_data],\n",
" outputs=[metirics_data, model_data],\n",
" allow_reuse=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import Pipeline\n",
"pipeline = Pipeline(\n",
" description=\"pipeline_with_automlstep\",\n",
" workspace=ws, \n",
" steps=[automl_step])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_run = experiment.submit(pipeline)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
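{
"cell_type": "markdown",
"metadata": {},
"source": [
"While the run is in progress you can also poll its status programmatically and, as mentioned earlier, cancel the whole run if needed. This is an optional sketch using the standard `Run` methods `get_status()` and `cancel()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: poll the current status of the pipeline run.\n",
"print(pipeline_run.get_status())\n",
"\n",
"# To stop the run early, uncomment the following line.\n",
"# pipeline_run.cancel()"
]
},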
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examine Results\n",
"\n",
"### Retrieve the metrics of all child runs\n",
"Outputs of above run can be used as inputs of other steps in pipeline. In this tutorial, we will examine the outputs by retrieve output data and running some tests."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"metrics_output = pipeline_run.get_pipeline_output(metrics_output_name)\n",
"num_file_downloaded = metrics_output.download('.', show_progress=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"with open(metrics_output._path_on_datastore) as f: \n",
" metrics_output_result = f.read()\n",
" \n",
"deserialized_metrics_output = json.loads(metrics_output_result)\n",
"df = pd.DataFrame(deserialized_metrics_output)\n",
"df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_model_output = pipeline_run.get_pipeline_output(best_model_output_name)\n",
"num_file_downloaded = best_model_output.download('.', show_progress=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
" import pickle\n",
"\n",
" with open(best_model_output._path_on_datastore, \"rb\" ) as f:\n",
" best_model = pickle.load(f)\n",
" best_model"
]
},
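{
"cell_type": "markdown",
"metadata": {},
"source": [
"The fitted model returned by AutoML is typically a scikit-learn `Pipeline`. As an optional sketch, you can list its steps (featurization and learner) before testing it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: if the deserialized model is a scikit-learn Pipeline, list its steps.\n",
"if hasattr(best_model, 'steps'):\n",
"    for step_name, step in best_model.steps:\n",
"        print(step_name, type(step).__name__)"
]
},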
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Model\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Best Model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 3, replace = False):\n",
" print(index)\n",
" predicted = best_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize=(3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "sanpil"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -83,10 +83,10 @@
"metadata": {},
"outputs": [],
"source": [
"# project folder\n",
"project_folder = '.'\n",
"# source directory\n",
"source_directory = '.'\n",
" \n",
"print('Sample projects will be created in {}.'.format(project_folder))"
"print('Sample scripts will be created in {} directory.'.format(source_directory))"
]
},
{
@@ -259,6 +259,44 @@
"**Open `train.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Specify conda dependencies and a base docker image through a RunConfiguration\n",
"\n",
"This step uses a docker image and scikit-learn, use a [**RunConfiguration**](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.runconfiguration?view=azure-ml-py) to specify these requirements and use when creating the PythonScriptStep. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.runconfig import DEFAULT_CPU_IMAGE\n",
"\n",
"# create a new runconfig object\n",
"run_config = RunConfiguration()\n",
"\n",
"# enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# set Docker base image to the default CPU-based image\n",
"run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE\n",
"\n",
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# auto-prepare the Docker image when used for execution (if it is not already prepared)\n",
"run_config.auto_prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -273,7 +311,8 @@
" inputs=[blob_input_data],\n",
" outputs=[processed_data1],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder\n",
" source_directory=source_directory,\n",
" runconfig=run_config\n",
")\n",
"print(\"trainStep created\")"
]
@@ -304,7 +343,7 @@
" inputs=[processed_data1],\n",
" outputs=[processed_data2],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
" source_directory=source_directory)\n",
"print(\"extractStep created\")"
]
},
@@ -312,8 +351,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes multiple intermediate data and produces intermediate data\n",
"In this step, we define a step that consumes multiple intermediate data and produces intermediate data.\n",
"#### Define a Step that consumes intermediate data and existing data and produces intermediate data\n",
"In this step, we define a step that consumes multiple data types and produces intermediate data.\n",
"\n",
"This step uses the output generated from the previous step as well as existing data on a DataStore. The location of the existing data is specified using a [**PipelineParameter**](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelineparameter?view=azure-ml-py) and a [**DataPath**](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.datapath.datapath?view=azure-ml-py). Using a PipelineParameter enables easy modification of the data location when the Pipeline is published and resubmitted.\n",
"\n",
"**Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**"
]
@@ -324,16 +365,31 @@
"metadata": {},
"outputs": [],
"source": [
"# Now define step6 that takes two inputs (both intermediate data), and produce an output\n",
"# Reference the data uploaded to blob storage using a PipelineParameter and a DataPath\n",
"from azureml.pipeline.core import PipelineParameter\n",
"from azureml.data.datapath import DataPath, DataPathComputeBinding\n",
"\n",
"datapath = DataPath(datastore=def_blob_store, path_on_datastore='20newsgroups/20news.pkl')\n",
"datapath_param = PipelineParameter(name=\"compare_data\", default_value=datapath)\n",
"data_parameter1 = (datapath_param, DataPathComputeBinding(mode='mount'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Now define the compare step which takes two inputs and produces an output\n",
"processed_data3 = PipelineData(\"processed_data3\", datastore=def_blob_store)\n",
"\n",
"compareStep = PythonScriptStep(\n",
" script_name=\"compare.py\",\n",
" arguments=[\"--compare_data1\", processed_data1, \"--compare_data2\", processed_data2, \"--output_compare\", processed_data3],\n",
" inputs=[processed_data1, processed_data2],\n",
" arguments=[\"--compare_data1\", data_parameter1, \"--compare_data2\", processed_data2, \"--output_compare\", processed_data3],\n",
" inputs=[data_parameter1, processed_data2],\n",
" outputs=[processed_data3], \n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
" source_directory=source_directory)\n",
"print(\"compareStep created\")"
]
},
@@ -351,10 +407,7 @@
"outputs": [],
"source": [
"pipeline1 = Pipeline(workspace=ws, steps=[compareStep])\n",
"print (\"Pipeline is built\")\n",
"\n",
"pipeline1.validate()\n",
"print(\"Simple validation complete\") "
"print (\"Pipeline is built\")"
]
},
{

View File

@@ -0,0 +1,30 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import argparse
import os
print("*********************************************************")
print("Hello Azure ML!")
parser = argparse.ArgumentParser()
parser.add_argument('--datadir', type=str, help="data directory")
parser.add_argument('--output', type=str, help="output")
args = parser.parse_args()
print("Argument 1: %s" % args.datadir)
print("Argument 2: %s" % args.output)
if not (args.output is None):
os.makedirs(args.output, exist_ok=True)
print("%s created" % args.output)
try:
from azureml.core import Run
run = Run.get_context()
print("Log Fibonacci numbers.")
run.log_list('Fibonacci numbers', [0, 1, 1, 2, 3, 5, 8, 13, 21, 34])
run.complete()
except ImportError:
print("Warning: you need to install Azure ML SDK in order to log metrics.")
print("*********************************************************")

View File

@@ -508,7 +508,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get AAD token"
"### Get AAD token\n",
"[This notebook](https://aka.ms/pl-restep-auth) shows how to authenticate to AML workspace."
]
},
{
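For reference, a minimal sketch of fetching an AAD token header with the SDK's authentication classes, assuming an interactive login is acceptable; the linked notebook covers the Azure CLI and service principal flows as well. `InteractiveLoginAuthentication` and `get_authentication_header()` are existing `azureml.core.authentication` APIs, but the exact usage here is illustrative.

```python
from azureml.core.authentication import InteractiveLoginAuthentication

# Illustrative sketch: obtain an AAD token header that can be passed as the
# Authorization header when calling a published pipeline REST endpoint.
auth = InteractiveLoginAuthentication()
aad_token = auth.get_authentication_header()
```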

View File

@@ -492,7 +492,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get AAD token"
"## Get AAD token\n",
"[This notebook](https://aka.ms/pl-restep-auth) shows how to authenticate to AML workspace."
]
},
{

View File

@@ -1,253 +1,253 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License.\n",
"\n",
"## Authentication in Azure Machine Learning\n",
"\n",
"This notebook shows you how to authenticate to your Azure ML Workspace using\n",
"\n",
" 1. Interactive Login Authentication\n",
" 2. Azure CLI Authentication\n",
" 3. Service Principal Authentication\n",
" \n",
"The interactive authentication is suitable for local experimentation on your own computer. Azure CLI authentication is suitable if you are already using Azure CLI for managing Azure resources, and want to sign in only once. The Service Principal authentication is suitable for automated workflows, for example as part of Azure Devops build."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Interactive Authentication\n",
"\n",
"Interactive authentication is the default mode when using Azure ML SDK.\n",
"\n",
"When you connect to your workspace using workspace.from_config, you will get an interactive login dialog."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Also, if you explicitly specify the subscription ID, resource group and resource group, you will get the dialog."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the user you're authenticated as must have access to the subscription and resource group. If you receive an error\n",
"\n",
"```\n",
"AuthenticationException: You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription. All the subscriptions that you have access to = ...\n",
"```\n",
"\n",
"check that the you used correct login and entered the correct subscription ID."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In some cases, you may see a version of the error message containing text: ```All the subscriptions that you have access to = []```\n",
"\n",
"In such a case, you may have to specify the tenant ID of the Azure Active Directory you're using. An example would be accessing a subscription as a guest to a tenant that is not your default. You specify the tenant by explicitly instantiating _InteractiveLoginAuthentication_ with tenant ID as argument ([see instructions how to obtain tenant Id](#get-tenant-id))."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.authentication import InteractiveLoginAuthentication\n",
"\n",
"interactive_auth = InteractiveLoginAuthentication(tenant_id=\"my-tenant-id\")\n",
"\n",
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=interactive_auth)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Azure CLI Authentication\n",
"\n",
"If you have installed azure-cli package, and used ```az login``` command to log in to your Azure Subscription, you can use _AzureCliAuthentication_ class.\n",
"\n",
"Note that interactive authentication described above won't use existing Azure CLI auth tokens. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.authentication import AzureCliAuthentication\n",
"\n",
"cli_auth = AzureCliAuthentication()\n",
"\n",
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=cli_auth)\n",
"\n",
"print(\"Found workspace {} at location {}\".format(ws.name, ws.location))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Service Principal Authentication\n",
"\n",
"When setting up a machine learning workflow as an automated process, we recommend using Service Principal Authentication. This approach decouples the authentication from any specific user login, and allows managed access control.\n",
"\n",
"Note that you must have administrator privileges over the Azure subscription to complete these steps.\n",
"\n",
"The first step is to create a service principal. First, go to [Azure Portal](https://portal.azure.com), select **Azure Active Directory** and **App Registrations**. Then select **+New application registration**, give your service principal a name, for example _my-svc-principal_. You can leave application type as is, and specify a dummy value for Sign-on URL, such as _https://invalid_.\n",
"\n",
"Then click **Create**.\n",
"\n",
"![service principal creation]<img src=\"images/svc-pr-1.PNG\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next step is to obtain the _Application ID_ (also called username) and create _password_ for the service principal.\n",
"\n",
"From the page for your newly created service principal, copy the _Application ID_. Then select **Settings** and **Keys**, write a description for your key, and select duration. Then click **Save**, and copy the _password_ to a secure location.\n",
"\n",
"![application id and password](images/svc-pr-2.PNG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id =\"get-tenant-id\"></a>\n",
"\n",
"Also, you need to obtain the tenant ID of your Azure subscription. Go back to **Azure Active Directory**, select **Properties** and copy _Directory ID_.\n",
"\n",
"![tenant id](images/svc-pr-3.PNG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, you need to give the service principal permissions to access your workspace. Navigate to **Resource Groups**, to the resource group for your Machine Learning Workspace. \n",
"\n",
"Then select **Access Control (IAM)** and **Add a role assignment**. For _Role_, specify which level of access you need to grant, for example _Contributor_. Start entering your service principal name and once it is found, select it, and click **Save**.\n",
"\n",
"![add role](images/svc-pr-4.PNG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you are ready to use the service principal authentication. For example, to connect to your Workspace, see code below and enter your own values for tenant ID, application ID, subscription ID, resource group and workspace.\n",
"\n",
"**We strongly recommended that you do not insert the secret password to code**. Instead, you can use environment variables to pass it to your code, for example through Azure Key Vault, or through secret build variables in Azure DevOps. For local testing, you can for example use following PowerShell command to set the environment variable.\n",
"\n",
"```\n",
"$env:AZUREML_PASSWORD = \"my-password\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from azureml.core.authentication import ServicePrincipalAuthentication\n",
"\n",
"svc_pr_password = os.environ.get(\"AZUREML_PASSWORD\")\n",
"\n",
"svc_pr = ServicePrincipalAuthentication(\n",
" tenant_id=\"my-tenant-id\",\n",
" service_principal_id=\"my-application-id\",\n",
" service_principal_password=svc_pr_password)\n",
"\n",
"\n",
"ws = Workspace(\n",
" subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=svc_pr\n",
" )\n",
"\n",
"print(\"Found workspace {} at location {}\".format(ws.name, ws.location))"
]
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License.\n",
"\n",
"## Authentication in Azure Machine Learning\n",
"\n",
"This notebook shows you how to authenticate to your Azure ML Workspace using\n",
"\n",
" 1. Interactive Login Authentication\n",
" 2. Azure CLI Authentication\n",
" 3. Service Principal Authentication\n",
" \n",
"The interactive authentication is suitable for local experimentation on your own computer. Azure CLI authentication is suitable if you are already using Azure CLI for managing Azure resources, and want to sign in only once. The Service Principal authentication is suitable for automated workflows, for example as part of Azure Devops build."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Interactive Authentication\n",
"\n",
"Interactive authentication is the default mode when using Azure ML SDK.\n",
"\n",
"When you connect to your workspace using workspace.from_config, you will get an interactive login dialog."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Also, if you explicitly specify the subscription ID, resource group and resource group, you will get the dialog."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the user you're authenticated as must have access to the subscription and resource group. If you receive an error\n",
"\n",
"```\n",
"AuthenticationException: You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription. All the subscriptions that you have access to = ...\n",
"```\n",
"\n",
"check that the you used correct login and entered the correct subscription ID."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In some cases, you may see a version of the error message containing text: ```All the subscriptions that you have access to = []```\n",
"\n",
"In such a case, you may have to specify the tenant ID of the Azure Active Directory you're using. An example would be accessing a subscription as a guest to a tenant that is not your default. You specify the tenant by explicitly instantiating _InteractiveLoginAuthentication_ with tenant ID as argument ([see instructions how to obtain tenant Id](#get-tenant-id))."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.authentication import InteractiveLoginAuthentication\n",
"\n",
"interactive_auth = InteractiveLoginAuthentication(tenant_id=\"my-tenant-id\")\n",
"\n",
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=interactive_auth)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Azure CLI Authentication\n",
"\n",
"If you have installed azure-cli package, and used ```az login``` command to log in to your Azure Subscription, you can use _AzureCliAuthentication_ class.\n",
"\n",
"Note that interactive authentication described above won't use existing Azure CLI auth tokens. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.authentication import AzureCliAuthentication\n",
"\n",
"cli_auth = AzureCliAuthentication()\n",
"\n",
"ws = Workspace(subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=cli_auth)\n",
"\n",
"print(\"Found workspace {} at location {}\".format(ws.name, ws.location))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Service Principal Authentication\n",
"\n",
"When setting up a machine learning workflow as an automated process, we recommend using Service Principal Authentication. This approach decouples the authentication from any specific user login, and allows managed access control.\n",
"\n",
"Note that you must have administrator privileges over the Azure subscription to complete these steps.\n",
"\n",
"The first step is to create a service principal. First, go to [Azure Portal](https://portal.azure.com), select **Azure Active Directory** and **App Registrations**. Then select **+New application registration**, give your service principal a name, for example _my-svc-principal_. You can leave application type as is, and specify a dummy value for Sign-on URL, such as _https://invalid_.\n",
"\n",
"Then click **Create**.\n",
"\n",
"![service principal creation]<img src=\"images/svc-pr-1.PNG\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next step is to obtain the _Application ID_ (also called username) and create _password_ for the service principal.\n",
"\n",
"From the page for your newly created service principal, copy the _Application ID_. Then select **Settings** and **Keys**, write a description for your key, and select duration. Then click **Save**, and copy the _password_ to a secure location.\n",
"\n",
"![application id and password](images/svc-pr-2.PNG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id =\"get-tenant-id\"></a>\n",
"\n",
"Also, you need to obtain the tenant ID of your Azure subscription. Go back to **Azure Active Directory**, select **Properties** and copy _Directory ID_.\n",
"\n",
"![tenant id](images/svc-pr-3.PNG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, you need to give the service principal permissions to access your workspace. Navigate to **Resource Groups**, to the resource group for your Machine Learning Workspace. \n",
"\n",
"Then select **Access Control (IAM)** and **Add a role assignment**. For _Role_, specify which level of access you need to grant, for example _Contributor_. Start entering your service principal name and once it is found, select it, and click **Save**.\n",
"\n",
"![add role](images/svc-pr-4.PNG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you are ready to use the service principal authentication. For example, to connect to your Workspace, see code below and enter your own values for tenant ID, application ID, subscription ID, resource group and workspace.\n",
"\n",
"**We strongly recommended that you do not insert the secret password to code**. Instead, you can use environment variables to pass it to your code, for example through Azure Key Vault, or through secret build variables in Azure DevOps. For local testing, you can for example use following PowerShell command to set the environment variable.\n",
"\n",
"```\n",
"$env:AZUREML_PASSWORD = \"my-password\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from azureml.core.authentication import ServicePrincipalAuthentication\n",
"\n",
"svc_pr_password = os.environ.get(\"AZUREML_PASSWORD\")\n",
"\n",
"svc_pr = ServicePrincipalAuthentication(\n",
" tenant_id=\"my-tenant-id\",\n",
" service_principal_id=\"my-application-id\",\n",
" service_principal_password=svc_pr_password)\n",
"\n",
"\n",
"ws = Workspace(\n",
" subscription_id=\"my-subscription-id\",\n",
" resource_group=\"my-ml-rg\",\n",
" workspace_name=\"my-ml-workspace\",\n",
" auth=svc_pr\n",
" )\n",
"\n",
"print(\"Found workspace {} at location {}\".format(ws.name, ws.location))"
]
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -220,14 +220,14 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import MpiConfiguration\n",
"from azureml.train.dnn import Chainer\n",
"\n",
"estimator = Chainer(source_directory=project_folder,\n",
" compute_target=compute_target,\n",
" entry_script='train_mnist.py',\n",
" node_count=2,\n",
" process_count_per_node=1,\n",
" distributed_backend='mpi',\n",
" distributed_training=MpiConfiguration(),\n",
" use_gpu=True)"
]
},

View File

@@ -233,14 +233,14 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import MpiConfiguration\n",
"from azureml.train.dnn import PyTorch\n",
"\n",
"estimator = PyTorch(source_directory=project_folder,\n",
" compute_target=compute_target,\n",
" entry_script='pytorch_horovod_mnist.py',\n",
" node_count=2,\n",
" process_count_per_node=1,\n",
" distributed_backend='mpi',\n",
" distributed_training=MpiConfiguration(),\n",
" use_gpu=True)"
]
},

View File

@@ -285,7 +285,9 @@
"metadata": {},
"source": [
"### Create a TensorFlow estimator\n",
"The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow)."
"The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-tensorflow).\n",
"\n",
"The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release."
]
},
{
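A minimal sketch of the version check described above, assuming the `azureml-train` package is installed:

```python
from azureml.train.dnn import TensorFlow

# List the TensorFlow versions the installed SDK supports, then pin
# framework_version to one of them when constructing the estimator.
print(TensorFlow.get_supported_versions())
```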
@@ -294,6 +296,7 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import MpiConfiguration\n",
"from azureml.train.dnn import TensorFlow\n",
"\n",
"script_params={\n",
@@ -305,9 +308,8 @@
" script_params=script_params,\n",
" entry_script='tf_horovod_word2vec.py',\n",
" node_count=2,\n",
" process_count_per_node=1,\n",
" distributed_backend='mpi',\n",
" use_gpu=True)"
" distributed_training=MpiConfiguration(),\n",
" framework_version='1.12')"
]
},
{

View File

@@ -26,7 +26,7 @@
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)\n",
"* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK"
"* Review the [tutorial](../train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) on single-node TensorFlow training using the SDK"
]
},
{
@@ -208,6 +208,7 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import TensorflowConfiguration\n",
"from azureml.train.dnn import TensorFlow\n",
"\n",
"script_params={\n",
@@ -215,14 +216,15 @@
" '--train_steps': 500\n",
"}\n",
"\n",
"distributed_training = TensorflowConfiguration()\n",
"distributed_training.worker_count = 2\n",
"\n",
"estimator = TensorFlow(source_directory=project_folder,\n",
" compute_target=compute_target,\n",
" script_params=script_params,\n",
" entry_script='tf_mnist_replica.py',\n",
" node_count=2,\n",
" worker_count=2,\n",
" parameter_server_count=1, \n",
" distributed_backend='ps',\n",
" distributed_training=distributed_training,\n",
" use_gpu=True)"
]
},

View File

@@ -291,7 +291,7 @@
"outputs": [],
"source": [
"# use a custom Docker image\n",
"from azureml.core.runconfig import ContainerRegistry\n",
"from azureml.core.container_registry import ContainerRegistry\n",
"\n",
"# this is an image available in Docker Hub\n",
"image_name = 'continuumio/miniconda3'\n",
@@ -309,7 +309,8 @@
"est = Estimator(source_directory='.', compute_target='local', \n",
" entry_script='dummy_train.py',\n",
" custom_docker_image=image_name,\n",
" image_registry_details=image_registry_details,\n",
" # uncomment below line to use your private ACR\n",
" #image_registry_details=image_registry_details,\n",
" user_managed=user_managed_dependencies\n",
" )\n",
"\n",
@@ -336,7 +337,7 @@
"metadata": {
"authors": [
{
"name": "minxia"
"name": "maxluk"
}
],
"kernelspec": {
@@ -356,7 +357,7 @@
"pygments_lexer": "ipython3",
"version": "3.6.8"
},
"msauthor": "haining"
"msauthor": "minxia"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -396,7 +396,7 @@
"est = TensorFlow(source_directory=script_folder,\n",
" script_params=script_params,\n",
" compute_target=compute_target, \n",
" conda_packages=['keras', 'matplotlib'],\n",
" pip_packages=['keras', 'matplotlib'],\n",
" entry_script='keras_mnist.py', \n",
" use_gpu=True)"
]
@@ -792,7 +792,7 @@
"outputs": [],
"source": [
"best_run = hdr.get_best_run_by_primary_metric()\n",
"print(best_run.get_details()['runDefinition']['Arguments'])"
"print(best_run.get_details()['runDefinition']['arguments'])"
]
},
{
@@ -1144,7 +1144,7 @@
"metadata": {
"authors": [
{
"name": "haining"
"name": "maxluk"
}
],
"kernelspec": {
@@ -1164,7 +1164,7 @@
"pygments_lexer": "ipython3",
"version": "3.6.7"
},
"msauthor": "haining"
"msauthor": "maxluk"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -396,7 +396,10 @@
"source": [
"## Create TensorFlow estimator\n",
"Next, we construct an `azureml.train.dnn.TensorFlow` estimator object, use the Batch AI cluster as compute target, and pass the mount-point of the datastore to the training code as a parameter.\n",
"The TensorFlow estimator is providing a simple way of launching a TensorFlow training job on a compute target. It will automatically provide a docker image that has TensorFlow installed -- if additional pip or conda packages are required, their names can be passed in via the `pip_packages` and `conda_packages` arguments and they will be included in the resulting docker."
"\n",
"The TensorFlow estimator is providing a simple way of launching a TensorFlow training job on a compute target. It will automatically provide a docker image that has TensorFlow installed -- if additional pip or conda packages are required, their names can be passed in via the `pip_packages` and `conda_packages` arguments and they will be included in the resulting docker.\n",
"\n",
"The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release."
]
},
{
@@ -419,7 +422,8 @@
" script_params=script_params,\n",
" compute_target=compute_target,\n",
" entry_script='tf_mnist.py', \n",
" use_gpu=True)"
" use_gpu=True, \n",
" framework_version='1.12')"
]
},
{
@@ -1158,7 +1162,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
"version": "3.6.6"
},
"msauthor": "minxia"
},

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -488,7 +488,7 @@
"metadata": {
"authors": [
{
"name": "haining"
"name": "roastala"
}
],
"kernelspec": {

View File

@@ -615,7 +615,7 @@
"metadata": {
"authors": [
{
"name": "haining"
"name": "roastala"
}
],
"kernelspec": {

View File

@@ -673,7 +673,7 @@
"metadata": {
"authors": [
{
"name": "haining"
"name": "roastala"
}
],
"kernelspec": {

Binary file not shown.

Before

Width:  |  Height:  |  Size: 10 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 26 KiB

52
index.html Normal file
View File

@@ -0,0 +1,52 @@
<!DOCTYPE html>
<html>
<head>
<meta name="google-site-verification" content="fkZxAt5AEHiB_Wom2R_25VTmNyj19J8lZlfTREsaEN4" />
<title>Azure Machine Learning</title>
</head>
<body>
<h1 id="azure-machine-learning-service-example-notebooks">Azure Machine Learning service example notebooks</h1>
<p>This repository contains example notebooks demonstrating the <a href="https://azure.microsoft.com/en-us/services/machine-learning-service/">Azure Machine Learning</a> Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.</p>
<div class="figure">
<img src="https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/service/media/overview-what-is-azure-ml/aml.png" alt="Azure ML workflow" /><p class="caption">Azure ML workflow</p>
</div>
<h2 id="quick-installation">Quick installation</h2>
<pre class="sh"><code>pip install azureml-sdk</code></pre>
<p>Read more detailed instructions on <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/NBSETUP.md">how to set up your environment</a> using Azure Notebook service, your own Jupyter notebook server, or Docker.</p>
<h2 id="how-to-navigate-and-use-the-example-notebooks">How to navigate and use the example notebooks?</h2>
<p>You should always run the <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb">Configuration</a> notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.</p>
<p>If you want to...</p>
<ul>
<li>...try out and explore Azure ML, start with image classification tutorials: <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/img-classification-part1-training.ipynb">Part 1 (Training)</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/img-classification-part2-deploy.ipynb">Part 2 (Deployment)</a>.</li>
<li>...prepare your data and do automated machine learning, start with regression tutorials: <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/regression-part1-data-prep.ipynb">Part 1 (Data Prep)</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/regression-part2-automated-ml.ipynb">Part 2 (Automated ML)</a>.</li>
<li>...learn about experimentation and tracking run history, first <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb">train within Notebook</a>, then try <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb">training on remote VM</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/logging-api/logging-api.ipynb">using logging APIs</a>.</li>
<li>...train deep learning models at scale, first learn about <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb">Machine Learning Compute</a>, and then try <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb">distributed hyperparameter tuning</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb">distributed training</a>.</li>
<li>...deploy models as a realtime scoring service, first learn the basics by <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb">training within Notebook and deploying to Azure Container Instance</a>, then learn how to <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb">register and manage models, and create Docker images</a>, and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb">production deploy models on Azure Kubernetes Cluster</a>.</li>
<li>...deploy models as a batch scoring service, first <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb">train a model within Notebook</a>, learn how to <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb">register and manage models</a>, then <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb">create Machine Learning Compute for scoring compute</a>, and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/pipeline-mpi-batch-prediction.ipynb">use Machine Learning Pipelines to deploy your model</a>.</li>
<li>...monitor your deployed models, learn about using <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb">App Insights</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/enable-data-collection-for-models-in-aks/enable-data-collection-for-models-in-aks.ipynb">model data collection</a>.</li>
</ul>
<h2 id="tutorials">Tutorials</h2>
<p>The <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials">Tutorials</a> folder contains notebooks for the tutorials described in the <a href="https://aka.ms/aml-docs">Azure Machine Learning documentation</a>.</p>
<h2 id="how-to-use-azure-ml">How to use Azure ML</h2>
<p>The <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml">How to use Azure ML</a> folder contains specific examples demonstrating the features of the Azure Machine Learning SDK.</p>
<ul>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training">Training</a> - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning">Training with Deep Learning</a> - Examples demonstrating how to build deep learning models using estimators and parameter sweeps</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/manage-azureml-service">Manage Azure ML Service</a> - Examples showing how to perform tasks, such as authenticating against the Azure ML service in different ways.</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning">Automated Machine Learning</a> - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines">Machine Learning Pipelines</a> - Examples showing how to create and use reusable pipelines for training and batch scoring</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment">Deployment</a> - Examples showing how to deploy and manage machine learning models and solutions</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-databricks">Azure Databricks</a> - Examples showing how to use Azure ML with Azure Databricks</li>
</ul>
<h2 id="projects-using-azure-machine-learning">Projects using Azure Machine Learning</h2>
<p>Visit following repos to see projects contributed by Azure ML users:</p>
<ul>
<li><a href="https://github.com/Microsoft/AzureML-BERT">Fine tune natural language processing models using Azure Machine Learning service</a></li>
<li><a href="https://github.com/amynic/azureml-sdk-fashion">Fashion MNIST with Azure ML SDK</a></li>
</ul>
</body>
</html>

File diff suppressed because one or more lines are too long

View File

@@ -661,7 +661,7 @@
"metadata": {
"authors": [
{
"name": "haining"
"name": "roastala"
}
],
"kernelspec": {
@@ -681,7 +681,7 @@
"pygments_lexer": "ipython3",
"version": "3.6.8"
},
"msauthor": "haining"
"msauthor": "roastala"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -592,7 +592,7 @@
"metadata": {
"authors": [
{
"name": "haining"
"name": "roastala"
}
],
"kernelspec": {

View File

@@ -60,7 +60,7 @@
"Use the following to install necessary packages if you don't already have them.\n",
"\n",
"```shell\n",
"pip install azureml-dataprep\n",
"pip install \"azureml-dataprep>=1.1.0,<1.2.0\"\n",
"```\n",
"\n",
"Import the SDK."
@@ -557,8 +557,7 @@
"import os\n",
"file_path = os.path.join(os.getcwd(), \"dflows.dprep\")\n",
"\n",
"package = dprep.Package([final_df])\n",
"package.save(file_path)"
"final_df.save(file_path)"
]
},
{

View File

@@ -137,8 +137,7 @@
"\n",
"file_path = os.path.join(os.getcwd(), \"dflows.dprep\")\n",
"\n",
"package_saved = dprep.Package.open(file_path)\n",
"dflow_prepared = package_saved.dataflows[0]\n",
"dflow_prepared = dprep.Dataflow.open(file_path)\n",
"dflow_prepared.get_profile()"
]
},