Compare commits

..

3 Commits

|Author|SHA1|Message|Date|
|-|-|-|-|
|Roope Astala|2ac62ae1ed|Merge pull request #144 from rastala/master (Version 1.0.6)|2018-12-20 12:43:17 -08:00|
|Roope Astala|cad5d5c97c|more files|2018-12-20 15:42:03 -05:00|
|Roope Astala|d8cf73503e|update to version 1.0.6|2018-12-20 15:40:20 -05:00|
69 changed files with 5260 additions and 1668 deletions

View File

@@ -7,9 +7,9 @@ which allows you to build, train, deploy and manage machine learning solutions u
allows you the choice of using local or cloud compute resources, while managing
and maintaining the complete data science workflow from the cloud.
You can find instructions on setting up notebooks [here](./NBSETUP.md)
* Read [instructions on setting up notebooks](./NBSETUP.md) to run these notebooks.
You can find full documentation for Azure Machine Learning [here](https://aka.ms/aml-docs)
* Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
## Getting Started

View File

@@ -96,7 +96,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.0.2 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.0.6 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -368,7 +368,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
"version": "3.6.5"
}
},
"nbformat": 4,

View File

@@ -13,3 +13,4 @@ As a pre-requisite, run the [configuration Notebook](../configuration.ipynb) not
* [enable-data-collection-for-models-in-aks](./deployment/enable-data-collection-for-models-in-aks) Learn about data collection APIs for a deployed model.
* [enable-app-insights-in-production-service](./deployment/enable-app-insights-in-production-service) Learn how to use App Insights with a production web service.
Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).

View File

@@ -34,7 +34,8 @@ Below are the three execution environments supported by AutoML.
**NOTE**: Please create your Azure Databricks cluster as v4.x (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: You should at least have contributor access to your Azure subscription to run the notebook.
- Please remove the previous SDK version if there is any and install the latest SDK by installing **azureml-sdk[automl_databricks]** as a PyPi library in Azure Databricks workspace.
- Download the sample notebook 16a.auto-ml-classification-local-azuredatabricks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) and import into the Azure databricks workspace.
- You can find the detailed Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks).
- Download the sample notebook AutoML_Databricks_local_06.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks) and import it into the Azure Databricks workspace.
- Attach the notebook to the cluster.
<a name="localconda"></a>
@@ -57,7 +58,7 @@ jupyter notebook
```
### 1. Install mini-conda from [here](https://conda.io/miniconda.html), choose Python 3.7 or higher.
### 1. Install mini-conda from [here](https://conda.io/miniconda.html), choose 64-bit Python 3.7 or higher.
- **Note**: if you already have conda installed, you can keep using it but it should be version 4.4.10 or later (as shown by: conda -V). If you have a previous version installed, you can update it using the command: conda update conda.
There's no need to install mini-conda specifically.
@@ -123,7 +124,7 @@ bash automl_setup_linux.sh
- [auto-ml-remote-batchai.ipynb](remote-batchai/auto-ml-remote-batchai.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using automated ML for classification using a remote Batch AI compute for training
- Example of using automated ML for classification using remote AmlCompute for training
- Parallel execution of iterations
- Async tracking of progress
- Cancelling individual iterations or entire run
@@ -178,114 +179,21 @@ bash automl_setup_linux.sh
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using AutoML for classification using Azure Databricks as the platform for training
- [auto-ml-classification_with_tensorflow.ipynb](classification_with_tensorflow/auto-ml-classification_with_tensorflow.ipynb)
- [auto-ml-classification-with-whitelisting.ipynb](classification-with-whitelisting/auto-ml-classification-with-whitelisting.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification with whitelisting tensorflow models.checkout
- Simple example of using Auto ML for classification with whitelisting tensorflow models.
- Uses local compute for training
- [auto-ml-forecasting-a.ipynb](forecasting-a/auto-ml-forecasting-a.ipynb)
- [auto-ml-forecasting-energy-demand.ipynb](forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
- Dataset: [NYC energy demand data](forecasting-a/nyc_energy.csv)
- Example of using AutoML for training a forecasting model
- [auto-ml-forecasting-b.ipynb](forecasting-b/auto-ml-forecasting-b.ipynb)
- [auto-ml-forecasting-orange-juice-sales.ipynb](forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb)
- Dataset: [Dominick's grocery sales of orange juice](forecasting-b/dominicks_OJ.csv)
- Example of training an AutoML forecasting model on multiple time-series
<a name="documentation"></a>
# Documentation
## Table of Contents
1. [Automated ML Settings ](#automlsettings)
1. [Cross validation split options](#cvsplits)
1. [Get Data Syntax](#getdata)
1. [Data pre-processing and featurization](#preprocessing)
<a name="automlsettings"></a>
## Automated ML Settings
|Property|Description|Default|
|-|-|-|
|**primary_metric**|This is the metric that you want to optimize.<br><br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i><br><br> Regression supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>| Classification: accuracy <br><br> Regression: spearman_correlation
|**iteration_timeout_minutes**|Time limit in minutes for each iteration|None|
|**iterations**|Number of iterations. In each iteration, AutoML trains a specific pipeline on the data. To get the best result, use at least 100.|100|
|**n_cross_validations**|Number of cross validation splits|None|
|**validation_size**|Size of validation set as percentage of all training samples|None|
|**max_concurrent_iterations**|Max number of iterations that would be executed in parallel|1|
|**preprocess**|*True/False* <br>Setting this to *True* enables preprocessing <br>on the input to handle missing data, and perform some common feature extraction<br>*Note: If input data is Sparse you cannot use preprocess=True*|False|
|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> You can set it to *-1* to use all cores|1|
|**experiment_exit_score**|*double* value indicating the target for *primary_metric*. <br> Once the target is surpassed the run terminates|None|
|**blacklist_models**|*Array* of *strings* indicating models to ignore for Auto ML from the list of models.|None|
|**whitelist_models**|*Array* of *strings* indicating the only models Auto ML should use, chosen from the list of models.|None|
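As a minimal sketch of how these settings are used (the digits data, iteration counts, model names, and project path below are illustrative assumptions, not values prescribed by this repository), they map onto `AutoMLConfig` keyword arguments:

```
import logging

from sklearn import datasets
from azureml.train.automl import AutoMLConfig

digits = datasets.load_digits()

# Illustrative values drawn from the settings table above.
automl_config = AutoMLConfig(task = 'classification',
                             primary_metric = 'AUC_weighted',
                             iteration_timeout_minutes = 10,
                             iterations = 25,
                             n_cross_validations = 5,
                             max_concurrent_iterations = 1,
                             preprocess = False,
                             whitelist_models = ['LightGBM', 'RandomForest'],
                             verbosity = logging.INFO,
                             X = digits.data,
                             y = digits.target,
                             path = './sample_projects/automl-settings-example')
```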
<a name="cvsplits"></a>
## List of models for white list/blacklist
**Classification**
<br><i>LogisticRegression</i>
<br><i>SGD</i>
<br><i>MultinomialNaiveBayes</i>
<br><i>BernoulliNaiveBayes</i>
<br><i>SVM</i>
<br><i>LinearSVM</i>
<br><i>KNN</i>
<br><i>DecisionTree</i>
<br><i>RandomForest</i>
<br><i>ExtremeRandomTrees</i>
<br><i>LightGBM</i>
<br><i>GradientBoosting</i>
<br><i>TensorFlowDNN</i>
<br><i>TensorFlowLinearClassifier</i>
<br><br>**Regression**
<br><i>ElasticNet</i>
<br><i>GradientBoosting</i>
<br><i>DecisionTree</i>
<br><i>KNN</i>
<br><i>LassoLars</i>
<br><i>SGD</i>
<br><i>RandomForest</i>
<br><i>ExtremeRandomTrees</i>
<br><i>LightGBM</i>
<br><i>TensorFlowLinearRegressor</i>
<br><i>TensorFlowDNN</i>
## Cross validation split options
### K-Folds Cross Validation
Use the *n_cross_validations* setting to specify the number of cross validations. The training data set will be randomly split into *n_cross_validations* folds of equal size. During each cross validation round, one of the folds will be used for validation of the model trained on the remaining folds. This process repeats for *n_cross_validations* rounds until each fold is used once as the validation set. Finally, the average scores across all *n_cross_validations* rounds will be reported, and the corresponding model will be retrained on the whole training data set.
### Monte Carlo Cross Validation (a.k.a. Repeated Random Sub-Sampling)
Use *validation_size* to specify the percentage of the training data set that should be used for validation, and use *n_cross_validations* to specify the number of cross validations. During each cross validation round, a subset of size *validation_size* will be randomly selected for validation of the model trained on the remaining data. Finally, the average scores across all *n_cross_validations* rounds will be reported, and the corresponding model will be retrained on the whole training data set.
### Custom train and validation set
You can specify separate train and validation sets either through get_data() or directly to the fit method.
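The three options above can be expressed as follows; this is a hedged sketch in which the fold count and the 20% split are example choices, not recommendations:

```
from sklearn import datasets
from sklearn.model_selection import train_test_split
from azureml.train.automl import AutoMLConfig

digits = datasets.load_digits()
X, y = digits.data, digits.target

# K-folds cross validation: five folds over the training data.
kfold_config = AutoMLConfig(task = 'classification', X = X, y = y,
                            n_cross_validations = 5)

# Monte Carlo cross validation: five rounds, each holding out 20% for validation.
monte_carlo_config = AutoMLConfig(task = 'classification', X = X, y = y,
                                  validation_size = 0.2,
                                  n_cross_validations = 5)

# Custom train and validation sets passed explicitly.
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size = 0.2)
custom_config = AutoMLConfig(task = 'classification',
                             X = X_train, y = y_train,
                             X_valid = X_valid, y_valid = y_valid)
```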
<a name="getdata"></a>
## get_data() syntax
The *get_data()* function can be used to return a dictionary with these values:
|Key|Type|Dependency|Mutually Exclusive with|Description|
|:-|:-|:-|:-|:-|
|X|Pandas Dataframe or Numpy Array|y|data_train, label, columns|All features to train with|
|y|Pandas Dataframe or Numpy Array|X|label|Label data to train with. For classification, this should be an array of integers. |
|X_valid|Pandas Dataframe or Numpy Array|X, y, y_valid|data_train, label|*Optional* All features to validate with. If this is not specified, X is split between train and validate|
|y_valid|Pandas Dataframe or Numpy Array|X, y, X_valid|data_train, label|*Optional* The label data to validate with. If this is not specified, y is split between train and validate|
|sample_weight|Pandas Dataframe or Numpy Array|y|data_train, label, columns|*Optional* A weight value for each label. Higher values indicate that the sample is more important.|
|sample_weight_valid|Pandas Dataframe or Numpy Array|y_valid|data_train, label, columns|*Optional* A weight value for each validation label. Higher values indicate that the sample is more important. If this is not specified, sample_weight is split between train and validate|
|data_train|Pandas Dataframe|label|X, y, X_valid, y_valid|All data (features+label) to train with|
|label|string|data_train|X, y, X_valid, y_valid|Which column in data_train represents the label|
|columns|Array of strings|data_train||*Optional* Whitelist of columns to use for features|
|cv_splits_indices|Array of integers|data_train||*Optional* List of indexes to split the data for cross validation|
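A minimal sketch of a `get_data()` script follows; the file name `get_data.py` and the digits data are assumptions used only for illustration:

```
from sklearn import datasets


def get_data():
    digits = datasets.load_digits()
    # Return the keys described in the table above: features and integer labels.
    return {"X": digits.data,
            "y": digits.target}
```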
<a name="preprocessing"></a>
## Data pre-processing and featurization
If you use `preprocess=True`, the following data preprocessing steps are performed automatically for you:
1. Dropping high cardinality or no variance features
- Features with no useful information are dropped from training and validation sets. These include features with all values missing, same value across all rows or with extremely high cardinality (e.g., hashes, IDs or GUIDs).
2. Missing value imputation
- For numerical features, missing values are imputed with average of values in the column.
- For categorical features, missing values are imputed with most frequent value.
3. Generating additional features
- For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.
- For Text features: Term frequency based on bi-grams and tri-grams, Count vectorizer.
4. Transformations and encodings
- Numeric features with very few unique values are transformed into categorical features.
See [Configure automated machine learning experiments](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-auto-train) to learn more about the settings and features available for automated machine learning experiments.
<a name="pythoncommand"></a>
# Running using python command
@@ -302,8 +210,9 @@ The main code of the file must be indented so that it is under this condition.
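The condition referenced above is presumably Python's standard `if __name__ == '__main__':` entry-point guard; the script name and settings below are assumptions for illustration, not a file shipped with this repository:

```
# run_automl.py (hypothetical driver script)
from sklearn import datasets
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig

if __name__ == '__main__':
    # Keep the main code indented under this guard so that worker processes
    # started during training do not re-run the experiment submission.
    ws = Workspace.from_config()
    experiment = Experiment(ws, 'automl-python-command')
    digits = datasets.load_digits()
    automl_config = AutoMLConfig(task = 'classification',
                                 X = digits.data,
                                 y = digits.target,
                                 iterations = 5)
    local_run = experiment.submit(automl_config, show_output = True)
```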
# Troubleshooting
## automl_setup fails
1. On Windows, make sure that you are running automl_setup from an Anaconda Prompt window rather than a regular cmd window. You can launch the "Anaconda Prompt" window by hitting the Start button and typing "Anaconda Prompt". If you don't see the application "Anaconda Prompt", you might not have conda or Miniconda installed. In that case, you can install it [here](https://conda.io/miniconda.html).
2. Check that you have conda 4.4.10 or later. You can check the version with the command `conda -V`. If you have a previous version installed, you can update it using the command: `conda update conda`.
3. Pass a new name as the first parameter to automl_setup so that it creates a new conda environment. You can view existing conda environments using `conda env list` and remove them with `conda env remove -n <environmentname>`.
2. Check that you have conda 64-bit installed rather than 32-bit. You can check this with the command `conda info`. The `platform` should be `win-64` for Windows or `osx-64` for Mac.
3. Check that you have conda 4.4.10 or later. You can check the version with the command `conda -V`. If you have a previous version installed, you can update it using the command: `conda update conda`.
4. Pass a new name as the first parameter to automl_setup so that it creates a new conda environment. You can view existing conda environments using `conda env list` and remove them with `conda env remove -n <environmentname>`.
## configuration.ipynb fails
1. For local conda, make sure that you have successfully run automl_setup first.

View File

@@ -21,16 +21,8 @@ if not errorlevel 1 (
call conda activate %conda_env_name% 2>nul:
if errorlevel 1 goto ErrorExit
call pip install psutil
call python -m ipykernel install --user --name %conda_env_name% --display-name "Python (%conda_env_name%)"
call jupyter nbextension install --py azureml.widgets --user
if errorlevel 1 goto ErrorExit
call jupyter nbextension enable --py azureml.widgets --user
if errorlevel 1 goto ErrorExit
echo.
echo.
echo ***************************************
@@ -39,7 +31,7 @@ echo ***************************************
echo.
echo Starting jupyter notebook - please run the configuration notebook
echo.
jupyter notebook --log-level=50
jupyter notebook --log-level=50 --notebook-dir='..\..'
goto End

View File

@@ -27,8 +27,6 @@ else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
jupyter nbextension install --py azureml.widgets --user &&
jupyter nbextension enable --py azureml.widgets --user &&
echo "" &&
echo "" &&
echo "***************************************" &&
@@ -37,7 +35,7 @@ else
echo "" &&
echo "Starting jupyter notebook - please run the configuration notebook" &&
echo "" &&
jupyter notebook --log-level=50
jupyter notebook --log-level=50 --notebook-dir '../..'
fi
if [ $? -gt 0 ]

View File

@@ -28,8 +28,6 @@ else
source activate $CONDA_ENV_NAME &&
conda install lightgbm -c conda-forge -y &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
jupyter nbextension install --py azureml.widgets --user &&
jupyter nbextension enable --py azureml.widgets --user &&
pip install numpy==1.15.3
echo "" &&
echo "" &&
@@ -39,7 +37,7 @@ else
echo "" &&
echo "Starting jupyter notebook - please run the configuration notebook" &&
echo "" &&
jupyter notebook --log-level=50
jupyter notebook --log-level=50 --notebook-dir '../..'
fi
if [ $? -gt 0 ]

View File

@@ -1,568 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Classification local on Azure DataBricks\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create Azure Machine Learning Workspace object and initialize your notebook directory to easily reload this object from a configuration file.\n",
"2. Create an `Experiment` in an existing `Workspace`.\n",
"3. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model using AzureDataBricks.\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"\n",
"Prerequisites:\n",
"Before running this notebook, run the install instructions described in README.md."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register Machine Learning Services Resource Provider\n",
"Microsoft.MachineLearningServices only needs to be registed once in the subscription. To register it:\n",
"Start the Azure portal.\n",
"Select your All services and then Subscription.\n",
"Select the subscription that you want to use.\n",
"Click on Resource providers\n",
"Click the Register link next to Microsoft.MachineLearningServices"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<SubscriptionId>\"\n",
"resource_group = \"myrg\"\n",
"workspace_name = \"myws\"\n",
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" exist_ok=True)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Folder to Host Sample Projects\n",
"Finally, create a folder where all the sample projects will be hosted."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"sample_projects_folder = './sample_projects'\n",
"\n",
"if not os.path.isdir(sample_projects_folder):\n",
" os.mkdir(sample_projects_folder)\n",
" \n",
"print('Sample projects will be created in {}.'.format(sample_projects_folder))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"import time\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-classification'\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Training Data Using DataPrep\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.dataprep as dprep\n",
"# You can use `auto_read_file` which intelligently figures out delimiters and datatypes of a file.\n",
"# The data referenced here was pulled from `sklearn.datasets.load_digits()`.\n",
"simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
"X = dprep.auto_read_file(simple_example_data_root + 'X.csv').skip(1) # Remove the header row.\n",
"\n",
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter)\n",
"# and convert column types manually.\n",
"# Here we read a comma delimited file and convert all columns to integers.\n",
"y = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Review the Data Preparation Result\n",
"You can peek the result of a Dataflow at any range using skip(i) and head(j). Doing so evaluates only j records for all the steps in the Dataflow, which makes it fast even against large datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X.skip(1).head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**spark_context**|Spark Context object.|\n",
"|**max_cuncurrent_iterations**|Maximum number of iterations to execute in parallel. This should be less than the number of cores on the ADB..|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"automl_settings = {\n",
" \"iteration_timeout_minutes\": 10,\n",
" \"iterations\": 10,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"preprocess\": False,\n",
" \"max_concurrent_iterations\": 2,\n",
" \"verbosity\": logging.INFO,\n",
" \"spark_context\": sc\n",
"}\n",
" \n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder, \n",
" X = X, \n",
" y = y,\n",
" **automl_settings\n",
" )\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Portal URL for Monitoring Runs\n",
"\n",
"The following will provide a link to the web interface to explore individual run details and status."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(local_run.get_portal_url())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"The following will show the child runs and waits for the parent run to complete."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `log_loss` value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"log_loss\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a Specific Iteration\n",
"Show the run and the model from the third iteration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"iteration = 3\n",
"third_run, third_model = local_run.get_output(iteration = iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"#### Load Test Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"digits = datasets.load_digits()\n",
"X_test = digits.data[:10, :]\n",
"y_test = digits.target[:10]\n",
"images = digits.images[:10]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Fitted Model\n",
"We will try to predict 2 digits and see how our model works."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Randomly select digits and test.\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
" print(index)\n",
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
" label = y_test[index]\n",
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
" fig = plt.figure(1, figsize = (3,3))\n",
" ax1 = fig.add_axes((0,0,.8,.8))\n",
" ax1.set_title(title)\n",
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
" display(fig)"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python [conda env:AutoML_ADB]",
"language": "python",
"name": "conda-env-AutoML_ADB-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"name": "auto-ml-classification-local-adb",
"notebookId": 3742842704905931
},
"nbformat": 4,
"nbformat_minor": 1
}

View File

@@ -13,11 +13,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Classification with Deployment\n",
"# Automated Machine Learning\n",
"_**Classification with Deployment**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Train](#Train)\n",
"1. [Deploy](#Deploy)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for a simple classification problem and deploy it to an Azure Container Instance (ACI).\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an experiment using an existing workspace.\n",
@@ -27,14 +42,14 @@
"5. Register the model.\n",
"6. Create a container image.\n",
"7. Create an Azure Container Instance (ACI) service.\n",
"8. Test the ACI service.\n"
"8. Test the ACI service."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -94,8 +109,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -113,7 +126,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
@@ -156,8 +169,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
@@ -171,10 +182,21 @@
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy\n",
"\n",
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
@@ -442,7 +464,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test a Web Service"
"## Test"
]
},
{

View File

@@ -13,11 +13,27 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Classification with Local Compute with Tensorflow DNNClassifier and LinearClassifier using whitelist models\n",
"# Automated Machine Learning\n",
"_**Classification using whitelist models**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"This notebooks shows how can automl can be trained on a a selected list of models,see the readme.md for the models.\n",
"This trains the model exclusively on tensorflow based models.\n",
"\n",
@@ -26,14 +42,14 @@
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model on a whilelisted models using local compute. \n",
"4. Explore the results.\n",
"5. Test the best fitted model.\n"
"5. Test the best fitted model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -70,8 +86,8 @@
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-classification'\n",
"project_folder = './sample_projects/automl-local-classification'\n",
"experiment_name = 'automl-local-whitelist'\n",
"project_folder = './sample_projects/automl-local-whitelist'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
@@ -91,8 +107,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -110,7 +124,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Training Data\n",
"## Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
@@ -134,7 +148,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
@@ -147,7 +161,8 @@
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
"|**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|"
]
},
{
@@ -174,8 +189,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
@@ -195,14 +208,14 @@
"metadata": {},
"outputs": [],
"source": [
"local_run\n"
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
"## Results"
]
},
{
@@ -316,7 +329,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"## Test\n",
"\n",
"#### Load Test Data"
]

View File

@@ -13,25 +13,42 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Classification with Local Compute\n",
"# Automated Machine Learning\n",
"_**Classification with Local Compute**_\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model.\n"
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Introduction\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -89,8 +106,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -108,7 +123,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Training Data\n",
"## Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
@@ -132,7 +147,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
@@ -170,8 +185,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
@@ -213,20 +226,11 @@
" iterations = 5)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
"## Results"
]
},
{
@@ -340,7 +344,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"## Test \n",
"\n",
"#### Load Test Data"
]

View File

@@ -1,154 +1,154 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning Configuration\n",
"\n",
"In this example you will create an Azure Machine Learning `Workspace` object and initialize your notebook directory to easily reload this object from a configuration file. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<subscription_id>\"\n",
"resource_group = \"myrg\"\n",
"workspace_name = \"myws\"\n",
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning Configuration\n",
"\n",
"In this example you will create an Azure Machine Learning `Workspace` object and initialize your notebook directory to easily reload this object from a configuration file. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<subscription_id>\"\n",
"resource_group = \"myrg\"\n",
"workspace_name = \"myws\"\n",
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -13,10 +13,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Prepare Data using `azureml.dataprep` for Remote Execution (DSVM)\n",
"# Automated Machine Learning\n",
"_**Prepare Data using `azureml.dataprep` for Remote Execution (DSVM)**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we showcase how you can use the `azureml.dataprep` SDK to load and prepare data for AutoML. `azureml.dataprep` can also be used standalone; full documentation can be found [here](https://github.com/Microsoft/PendletonDocs).\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Define data loading and preparation steps in a `Dataflow` using `azureml.dataprep`.\n",
@@ -28,7 +44,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Compatibility\n",
"## Setup\n",
"\n",
"Currently, Data Prep only supports __Ubuntu 16__ and __Red Hat Enterprise Linux 7__. We are working on supporting more linux distros."
]
@@ -37,8 +53,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -56,8 +70,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
@@ -112,7 +124,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading Data using DataPrep"
"## Data"
]
},
{
@@ -136,8 +148,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Review the Data Preparation Result\n",
"\n",
"You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large datasets."
]
},
@@ -154,7 +164,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"## Train\n",
"\n",
"This creates a general AutoML settings object applicable for both local and remote runs."
]
@@ -175,13 +185,6 @@
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remote Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -262,11 +265,20 @@
"remote_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
"## Results"
]
},
{
@@ -380,7 +392,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"## Test\n",
"\n",
"#### Load Test Data"
]

View File

@@ -13,10 +13,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Prepare Data using `azureml.dataprep` for Local Execution\n",
"# Automated Machine Learning\n",
"_**Prepare Data using `azureml.dataprep` for Local Execution**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we showcase how you can use the `azureml.dataprep` SDK to load and prepare data for AutoML. `azureml.dataprep` can also be used standalone; full documentation can be found [here](https://github.com/Microsoft/PendletonDocs).\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Define data loading and preparation steps in a `Dataflow` using `azureml.dataprep`.\n",
@@ -28,7 +44,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Compatibility\n",
"## Setup\n",
"\n",
"Currently, Data Prep only supports __Ubuntu 16__ and __Red Hat Enterprise Linux 7__. We are working on supporting more linux distros."
]
@@ -37,8 +53,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -56,8 +70,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
@@ -110,7 +122,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading Data using DataPrep"
"## Data"
]
},
{
@@ -134,7 +146,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Review the Data Preparation Result\n",
"### Review the Data Preparation Result\n",
"\n",
"You can peek the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large datasets."
]
@@ -152,7 +164,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"## Train\n",
"\n",
"This creates a general AutoML settings object applicable for both local and remote runs."
]
@@ -173,13 +185,6 @@
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -211,11 +216,20 @@
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
"## Results"
]
},
{
@@ -329,7 +343,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"## Test\n",
"\n",
"#### Load Test Data"
]

View File

@@ -13,24 +13,38 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Exploring Previous Runs\n",
"# Automated Machine Learning\n",
"_**Exploring Previous Runs**_\n",
"\n",
"In this example we present some examples on navigating previously executed runs. We also show how you can download a fitted model for any previous run.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. List all experiments in a workspace.\n",
"2. List all AutoML runs in an experiment.\n",
"3. Get details for an AutoML run, including settings, run widget, and all metrics.\n",
"4. Download a fitted pipeline for any iteration.\n"
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Explore](#Explore)\n",
"1. [Download](#Download)\n",
"1. [Register](#Register)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# List all AutoML Experiments in a Workspace"
"## Introduction\n",
"In this example we present some examples on navigating previously executed runs. We also show how you can download a fitted model for any previous run.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. List all experiments in a workspace.\n",
"2. List all AutoML runs in an experiment.\n",
"3. Get details for an AutoML run, including settings, run widget, and all metrics.\n",
"4. Download a fitted pipeline for any iteration."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
@@ -64,29 +78,13 @@
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"experiment_list = Experiment.list(workspace=ws)\n",
"\n",
"summary_df = pd.DataFrame(index = ['No of Runs'])\n",
"pattern = re.compile('^AutoML_[^_]*$')\n",
"for experiment in experiment_list:\n",
" all_runs = list(experiment.get_runs())\n",
" automl_runs = []\n",
" for run in all_runs:\n",
" if(pattern.match(run.id)):\n",
" automl_runs.append(run) \n",
" summary_df[experiment.name] = [len(automl_runs)]\n",
" \n",
"pd.set_option('display.max_colwidth', -1)\n",
"summary_df.T"
"ws = Workspace.from_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -104,7 +102,38 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# List AutoML runs for an experiment\n",
"## Explore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### List Experiments"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_list = Experiment.list(workspace=ws)\n",
"\n",
"summary_df = pd.DataFrame(index = ['No of Runs'])\n",
"for experiment in experiment_list:\n",
" automl_runs = list(experiment.get_runs(type='automl'))\n",
" summary_df[experiment.name] = [len(automl_runs)]\n",
" \n",
"pd.set_option('display.max_colwidth', -1)\n",
"summary_df.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### List runs for an experiment\n",
"Set `experiment_name` to any experiment name from the result of the Experiment.list cell to load the AutoML runs."
]
},
@@ -118,21 +147,19 @@
"\n",
"proj = ws.experiments[experiment_name]\n",
"summary_df = pd.DataFrame(index = ['Type', 'Status', 'Primary Metric', 'Iterations', 'Compute', 'Name'])\n",
"pattern = re.compile('^AutoML_[^_]*$')\n",
"all_runs = list(proj.get_runs(properties={'azureml.runsource': 'automl'}))\n",
"automl_runs = list(proj.get_runs(type='automl'))\n",
"automl_runs_project = []\n",
"for run in all_runs:\n",
" if(pattern.match(run.id)):\n",
" properties = run.get_properties()\n",
" tags = run.get_tags()\n",
" amlsettings = eval(properties['RawAMLSettingsString'])\n",
" if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
" else:\n",
" iterations = properties['num_iterations']\n",
" summary_df[run.id] = [amlsettings['task_type'], run.get_details()['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name']]\n",
" if run.get_details()['status'] == 'Completed':\n",
" automl_runs_project.append(run.id)\n",
"for run in automl_runs:\n",
" properties = run.get_properties()\n",
" tags = run.get_tags()\n",
" amlsettings = eval(properties['RawAMLSettingsString'])\n",
" if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
" else:\n",
" iterations = properties['num_iterations']\n",
" summary_df[run.id] = [amlsettings['task_type'], run.get_details()['status'], properties['primary_metric'], iterations, properties['target'], amlsettings['name']]\n",
" if run.get_details()['status'] == 'Completed':\n",
" automl_runs_project.append(run.id)\n",
" \n",
"from IPython.display import HTML\n",
"projname_html = HTML(\"<h3>{}</h3>\".format(proj.name))\n",
@@ -146,7 +173,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Get details for an AutoML run\n",
"### Get details for a run\n",
"\n",
"Copy the project name and run id from the previous cell output to find more details on a particular run."
]
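A minimal sketch of loading one of those runs; the experiment name and run id are placeholders to be replaced with values from the table above.

```python
# Both values below are hypothetical placeholders.
from azureml.train.automl.run import AutoMLRun

experiment_name = 'automl-local-classification'
run_id = 'AutoML_00000000-0000-0000-0000-000000000000'

automl_run = AutoMLRun(experiment=ws.experiments[experiment_name], run_id=run_id)
automl_run.get_details()   # status, settings, and other run metadata
automl_run.get_metrics()   # all metrics logged for the parent run
```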
@@ -207,14 +234,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Download fitted models"
"## Download"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download the Best Model for Any Given Metric"
"### Download the Best Model for Any Given Metric"
]
},
{
@@ -232,7 +259,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download the Model for Any Given Iteration"
"### Download the Model for Any Given Iteration"
]
},
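Both download variants go through `get_output` on the run object loaded above (variable name carried over from the earlier sketch); the metric name and iteration number here are illustrative only.

```python
# Best fitted pipeline for a given metric (metric name is an assumption).
best_run, best_model = automl_run.get_output(metric='accuracy')

# Fitted pipeline from a specific iteration (iteration number is an assumption).
third_run, third_model = automl_run.get_output(iteration=3)
```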
{
@@ -250,7 +277,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register fitted model for deployment\n",
"## Register"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register fitted model for deployment\n",
"If neither `metric` nor `iteration` are specified in the `register_model` call, the iteration with the best primary metric is registered."
]
},
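A minimal sketch of that call; the description and tags are illustrative.

```python
# With neither metric nor iteration passed, the best iteration by primary metric is registered.
description = 'AutoML model registered from a previous run'   # illustrative
tags = {'area': 'digits', 'type': 'classification'}           # illustrative
model = automl_run.register_model(description=description, tags=tags)
print(model.name, model.version)
```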
@@ -270,7 +304,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the Best Model for Any Given Metric"
"### Register the Best Model for Any Given Metric"
]
},
{
@@ -290,7 +324,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the Model for Any Given Iteration"
"### Register the Model for Any Given Iteration"
]
},
{

View File

@@ -13,11 +13,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Energy Demand Forecasting\n",
"# Automated Machine Learning\n",
"_**Energy Demand Forecasting**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example, we show how AutoML can be used for energy demand forecasting.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment in an existing Workspace\n",
@@ -31,7 +44,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"## Setup\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
@@ -92,7 +105,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Read Data\n",
"## Data\n",
"Read energy demanding data from file, and preview data."
]
},
@@ -170,7 +183,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
@@ -217,8 +230,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
@@ -232,6 +243,15 @@
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -13,11 +13,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Orange Juice Sales Forecasting\n",
"# Automated Machine Learning\n",
"_**Orange Juice Sales Forecasting**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example, we use AutoML to find and tune a time-series forecasting model.\n",
"\n",
"Make sure you have executed the [configuration notebook](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook, you will:\n",
"1. Create an Experiment in an existing Workspace\n",
@@ -25,7 +38,6 @@
"3. Find and train a forecasting model using local compute\n",
"4. Evaluate the performance of the model\n",
"\n",
"## Sample Data\n",
"The examples in the follow code samples use the [University of Chicago's Dominick's Finer Foods dataset](https://research.chicagobooth.edu/kilts/marketing-databases/dominicks) to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area."
]
},
@@ -33,7 +45,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"## Setup\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model. "
]
@@ -92,7 +104,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Read Data\n",
"## Data\n",
"You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type."
]
},
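A minimal sketch of that load; the CSV file name is an assumption about this sample and may differ in the repository.

```python
import pandas as pd

# Parse the time column into datetime while reading (file name assumed).
data = pd.read_csv('dominicks_OJ.csv', parse_dates=['WeekStarting'])
data.head()
```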
@@ -206,7 +218,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an AutoMLConfig\n",
"## Train\n",
"\n",
"The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, and the training and validation data. \n",
"\n",
@@ -256,8 +268,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.\n",
"Information from each iteration will be printed to the console."
]
@@ -271,6 +281,15 @@
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -13,11 +13,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Blacklisting Models, Early Termination, and Handling Missing Data\n",
"# Automated Machine Learning\n",
"_**Blacklisting Models, Early Termination, and Handling Missing Data**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for handling missing values in data. We also provide a stopping metric indicating a target for the primary metrics so that AutoML can terminate the run without necessarly going through all the iterations. Finally, if you want to avoid a certain pipeline, we allow you to specify a blacklist of algorithms that AutoML will ignore for this run.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
@@ -29,14 +44,14 @@
"In addition this notebook showcases the following features\n",
"- **Blacklisting** certain pipelines\n",
"- Specifying **target metrics** to indicate stopping criteria\n",
"- Handling **missing data** in the input\n"
"- Handling **missing data** in the input"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -94,8 +109,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -113,7 +126,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating missing data"
"## Data"
]
},
{
@@ -153,7 +166,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment. This includes setting `experiment_exit_score`, which should cause the run to complete before the `iterations` count is reached.\n",
"\n",
@@ -197,8 +210,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
@@ -212,11 +223,20 @@
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
"## Results"
]
},
{
@@ -324,7 +344,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the best Fitted Model"
"## Test"
]
},
{

View File

@@ -13,25 +13,39 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Explain classification model and visualize the explanation\n",
"# Automated Machine Learning\n",
"_**Explain classification model and visualize the explanation**_\n",
"\n",
"In this example we use the sklearn's [iris dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) to showcase how you can use the AutoML Classifier for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"3. Training the Model using local compute and explain the model\n",
"4. Visualization model's feature importance in widget\n",
"5. Explore best model's explanation\n"
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"## Introduction\n",
"In this example we use the sklearn's [iris dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) to showcase how you can use the AutoML Classifier for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"3. Training the Model using local compute and explain the model\n",
"4. Visualization model's feature importance in widget\n",
"5. Explore best model's explanation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
@@ -85,8 +99,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
@@ -104,7 +116,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Iris Data Set"
"## Data"
]
},
{
@@ -136,7 +148,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"## Train\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
@@ -178,8 +190,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
@@ -193,11 +203,20 @@
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the results"
"## Results"
]
},
{

View File

@@ -13,25 +13,40 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML: Regression with Local Compute\n",
"# Automated Machine Learning\n",
"_**Regression with Local Compute**_\n",
"\n",
"In this example we use the scikit-learn's [diabetes dataset](http://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) to showcase how you can use AutoML for a simple regression problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model.\n"
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Introduction\n",
"In this example we use the scikit-learn's [diabetes dataset](http://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) to showcase how you can use AutoML for a simple regression problem.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -89,8 +104,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -108,7 +121,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Training Data\n",
"## Data\n",
"This uses scikit-learn's [load_diabetes](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html) method."
]
},
@@ -120,8 +133,6 @@
"source": [
"# Load the diabetes dataset, a well-known built-in small dataset that comes with scikit-learn.\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"X, y = load_diabetes(return_X_y = True)\n",
@@ -135,7 +146,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
@@ -173,8 +184,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
@@ -201,7 +210,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
"## Results"
]
},
{
@@ -315,7 +324,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model"
"## Test"
]
},
{

View File

@@ -13,11 +13,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Remote Execution using attach\n",
"# Automated Machine Learning\n",
"_**Remote Execution using attach**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML to handle text data with remote attach.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
@@ -33,14 +48,14 @@
"- **Cancellation** of individual iterations or the entire run\n",
"- Retrieving models for any iteration or logged metric\n",
"- Specifying AutoML settings as `**kwargs`\n",
"- Handling **text** data using the `preprocess` flag\n"
"- Handling **text** data using the `preprocess` flag"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -77,8 +92,8 @@
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = 'automl-remote-dsvm-blobstore'\n",
"project_folder = './sample_projects/automl-remote-dsvm-blobstore'\n",
"experiment_name = 'automl-remote-attach'\n",
"project_folder = './sample_projects/automl-remote-attach'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
@@ -98,8 +113,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -117,7 +130,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach a Remote Linux DSVM\n",
"### Attach a Remote Linux DSVM\n",
"To use a remote Docker compute target:\n",
"1. Create a Linux DSVM in Azure, following these [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_run`s. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4GB per core.\n",
"2. Enter the IP address, user name and password below.\n",
@@ -184,7 +197,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"## Data\n",
"For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"In this example, the `get_data()` function returns a [dictionary](README.md#getdata)."
]
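A hypothetical sketch of such a `get_data.py`; the data source and category subset are illustrative, and the dictionary keys are assumed to be `X` and `y`.

```python
# Hypothetical get_data.py, saved into the project folder before submitting the run.
import numpy as np
from sklearn.datasets import fetch_20newsgroups

def get_data():
    data = fetch_20newsgroups(subset='train',
                              categories=['sci.space', 'rec.autos'])  # illustrative subset
    # One text column; the preprocess flag in AutoMLConfig handles featurization.
    X = np.array(data.data).reshape(-1, 1)
    return {'X': X, 'y': data.target}
```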
@@ -232,7 +245,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"## Train\n",
"\n",
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
"\n",
@@ -277,8 +290,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models <a class=\"anchor\" id=\"Training-the-model-Remote-DSVM\"></a>\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run."
]
},
@@ -291,11 +302,20 @@
"remote_run = experiment.submit(automl_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the Results <a class=\"anchor\" id=\"Exploring-the-Results-Remote-DSVM\"></a>\n",
"## Results\n",
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
@@ -329,7 +349,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pre-process cache cleanup\n",
"### Pre-process cache cleanup\n",
"The preprocess data gets cache at user default file store. When the run is completed the cache can be cleaned by running below cell"
]
},
@@ -372,7 +392,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cancelling Runs\n",
"### Cancelling Runs\n",
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
]
},
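A minimal sketch; the iteration number is illustrative.

```python
remote_run.cancel_iteration(1)   # cancel a single child iteration
# remote_run.cancel()            # or cancel the entire run
```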
@@ -448,7 +468,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n"
"## Test"
]
},
{

View File

@@ -13,17 +13,32 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Remote Execution using Batch AI\n",
"# Automated Machine Learning\n",
"_**Remote Execution using AmlCompute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Attach an existing Batch AI compute to a workspace.\n",
"2. Create or Attach existing AmlCompute to a workspace.\n",
"3. Configure AutoML using `AutoMLConfig`.\n",
"4. Train the model using Batch AI.\n",
"4. Train the model using AmlCompute\n",
"5. Explore the results.\n",
"6. Test the best fitted model.\n",
"\n",
@@ -32,14 +47,14 @@
"- **Asynchronous** tracking of progress\n",
"- **Cancellation** of individual iterations or the entire run\n",
"- Retrieving models for any iteration or logged metric\n",
"- Specifying AutoML settings as `**kwargs`\n"
"- Specifying AutoML settings as `**kwargs`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -76,8 +91,8 @@
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = 'automl-remote-batchai'\n",
"project_folder = './sample_projects/automl-remote-batchai'\n",
"experiment_name = 'automl-remote-amlcompute'\n",
"project_folder = './sample_projects/automl-remote-amlcompute'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
@@ -97,8 +112,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -116,12 +129,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Batch AI Cluster\n",
"The cluster is created as Machine Learning Compute and will appear under your workspace.\n",
"### Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
"\n",
"**Note:** The creation of the Batch AI cluster can take over 10 minutes, please be patient.\n",
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"\n",
"As with other Azure services, there are limits on certain resources (e.g. Batch AI cluster size) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
@@ -134,15 +147,15 @@
"from azureml.core.compute import ComputeTarget\n",
"\n",
"# Choose a name for your cluster.\n",
"batchai_cluster_name = \"automlcl\"\n",
"amlcompute_cluster_name = \"automlcl\"\n",
"\n",
"found = False\n",
"# Check if this compute target already exists in the workspace.\n",
"cts = ws.compute_targets\n",
"if batchai_cluster_name in cts and cts[batchai_cluster_name].type == 'BatchAI':\n",
"if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
" found = True\n",
" print('Found existing compute target.')\n",
" compute_target = cts[batchai_cluster_name]\n",
" compute_target = cts[amlcompute_cluster_name]\n",
" \n",
"if not found:\n",
" print('Creating a new compute target...')\n",
@@ -151,13 +164,13 @@
" max_nodes = 6)\n",
"\n",
" # Create the cluster.\n",
" compute_target = ComputeTarget.create(ws, batchai_cluster_name, provisioning_config)\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
" \n",
" # Can poll for a minimum number of nodes and for a specific timeout.\n",
" # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
" compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
" \n",
" # For a more detailed view of current Batch AI cluster status, use the 'status' property."
" # For a more detailed view of current AmlCompute status, use the 'status' property."
]
},
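A condensed sketch of the same create-or-reuse pattern; the VM size and node count are assumptions.

```python
from azureml.core.compute import AmlCompute, ComputeTarget

cluster_name = "automlcl"
if cluster_name in ws.compute_targets and ws.compute_targets[cluster_name].type == 'AmlCompute':
    compute_target = ws.compute_targets[cluster_name]          # reuse the existing cluster
else:
    config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2",  # assumed size
                                                   max_nodes = 6)
    compute_target = ComputeTarget.create(ws, cluster_name, config)
    compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
```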
{
@@ -172,7 +185,7 @@
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to the Batch AI cluster\n",
"# Set compute target to AmlCompute\n",
"conda_run_config.target = compute_target\n",
"conda_run_config.environment.docker.enabled = True\n",
"conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
@@ -185,7 +198,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"## Data\n",
"For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"In this example, the `get_data()` function returns data using scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
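A minimal sketch of that `get_data.py`, assuming the standard `X`/`y` dictionary keys; the row slicing is illustrative.

```python
# Hypothetical get_data.py returning the digits data.
from sklearn.datasets import load_digits

def get_data():
    digits = load_digits()
    return {'X': digits.data[100:, :], 'y': digits.target[100:]}
```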
@@ -225,11 +238,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"## Train\n",
"\n",
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
"\n",
"**Note:** When using Batch AI, you can't pass Numpy arrays directly to the fit method.\n",
"**Note:** When using AmlCompute, you can't pass Numpy arrays directly to the fit method.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
@@ -269,8 +282,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.\n",
"In this example, we specify `show_output = False` to suppress console output while the run is in progress."
]
@@ -284,11 +295,20 @@
"remote_run = experiment.submit(automl_config, show_output = False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results\n",
"## Results\n",
"\n",
"#### Loading executed runs\n",
"In case you need to load a previously executed run, enable the cell below and replace the `run_id` value."
@@ -373,7 +393,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cancelling Runs\n",
"### Cancelling Runs\n",
"\n",
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
]
@@ -455,7 +475,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n",
"## Test\n",
"\n",
"#### Load Test Data"
]

View File

@@ -13,26 +13,40 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Remote Execution with DataStore\n",
"# Automated Machine Learning\n",
"_**Remote Execution with DataStore**_\n",
"\n",
"This sample accesses a data file on a remote DSVM through DataStore. Advantages of using data store are:\n",
"1. DataStore secures the access details.\n",
"2. DataStore supports read, write to blob and file store\n",
"3. AutoML natively supports copying data from DataStore to DSVM\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Storing data in DataStore.\n",
"2. get_data returning data from DataStore.\n",
"\n"
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"## Introduction\n",
"This sample accesses a data file on a remote DSVM through DataStore. Advantages of using data store are:\n",
"1. DataStore secures the access details.\n",
"2. DataStore supports read, write to blob and file store\n",
"3. AutoML natively supports copying data from DataStore to DSVM\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Storing data in DataStore.\n",
"2. get_data returning data from DataStore."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
@@ -73,7 +87,7 @@
"# choose a name for experiment\n",
"experiment_name = 'automl-remote-datastore-file'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-remote-dsvm-file'\n",
"project_folder = './sample_projects/automl-remote-datastore-file'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
@@ -93,8 +107,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
]
},
@@ -112,7 +124,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Remote Linux DSVM\n",
"### Create a Remote Linux DSVM\n",
"Note: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
"\n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you can switch to a different port (such as 5022), you can append the port number to the address. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on this."
@@ -144,7 +156,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Copy data file to local\n",
"## Data\n",
"\n",
"### Copy data file to local\n",
"\n",
"Download the data file.\n"
]
@@ -186,7 +200,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upload data to the cloud"
"### Upload data to the cloud"
]
},
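A minimal sketch of that upload; the default datastore and the local/target paths are assumptions matching the download step above.

```python
ds = ws.get_default_datastore()
ds.upload(src_dir='./data', target_path='data', overwrite=True, show_progress=True)
```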
{
@@ -224,7 +238,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure & Run\n",
"### Configure & Run\n",
"\n",
"First let's create a DataReferenceConfigruation object to inform the system what data folder to download to the compute target.\n",
"The path_on_compute should be an absolute path to ensure that the data files are downloaded only once. The get_data method should use this same path to access the data files."
@@ -269,7 +283,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"### Create Get Data File\n",
"For remote executions you should author a get_data.py file containing a get_data() function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"\n",
"The *get_data()* function returns a [dictionary](README.md#getdata).\n",
@@ -308,7 +322,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"## Train\n",
"\n",
"You can specify automl_settings as **kwargs** as well. Also note that you can use the get_data() symantic for local excutions too. \n",
"\n",
@@ -355,8 +369,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Models <a class=\"anchor\" id=\"Training-the-model-Remote-DSVM\"></a>\n",
"\n",
"For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets/models even when the experiment is running to retreive the best model up to that point. Once you are satisfied with the model you can cancel a particular iteration or the whole run."
]
},
@@ -369,11 +381,20 @@
"remote_run = experiment.submit(automl_config, show_output=False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the Results <a class=\"anchor\" id=\"Exploring-the-Results-Remote-DSVM\"></a>\n",
"## Results\n",
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
@@ -433,7 +454,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Canceling Runs\n",
"### Canceling Runs\n",
"You can cancel ongoing remote runs using the *cancel()* and *cancel_iteration()* functions"
]
},
@@ -454,7 +475,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pre-process cache cleanup\n",
"### Pre-process cache cleanup\n",
"The preprocess data gets cache at user default file store. When the run is completed the cache can be cleaned by running below cell"
]
},
@@ -523,7 +544,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Best Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n"
"## Test\n"
]
},
{

View File

@@ -13,11 +13,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Remote Execution using DSVM (Ubuntu)\n",
"# Automated Machine Learning\n",
"_**Remote Execution using DSVM (Ubuntu)**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you wiil learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
@@ -32,14 +47,14 @@
"- **Asynchronous** tracking of progress\n",
"- **Cancellation** of individual iterations or the entire run\n",
"- Retrieving models for any iteration or logged metric\n",
"- Specifying AutoML settings as `**kwargs`\n"
"- Specifying AutoML settings as `**kwargs`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -77,8 +92,8 @@
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = 'automl-remote-dsvm4'\n",
"project_folder = './sample_projects/automl-remote-dsvm4'\n",
"experiment_name = 'automl-remote-dsvm'\n",
"project_folder = './sample_projects/automl-remote-dsvm'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
@@ -98,8 +113,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -117,7 +130,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Remote Linux DSVM\n",
"### Create a Remote Linux DSVM\n",
"**Note:** If creation fails with a message about Marketplace purchase eligibilty, start creation of a DSVM through the [Azure portal](https://portal.azure.com), and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled this setting, you can exit the portal without actually creating the DSVM, and creation of the DSVM through the notebook should work.\n"
]
},
@@ -135,7 +148,7 @@
" print('Found an existing DSVM.')\n",
"except:\n",
" print('Creating a new DSVM.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2s_v3\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)\n",
" print(\"Waiting one minute for ssh to be accessible\")\n",
@@ -165,7 +178,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"## Data\n",
"For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"In this example, the `get_data()` function returns data using scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
@@ -205,7 +218,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML <a class=\"anchor\" id=\"Instantiate-AutoML-Remote-DSVM\"></a>\n",
"## Train\n",
"\n",
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
"\n",
@@ -256,8 +269,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.\n",
"\n",
"In this example, we specify `show_output = False` to suppress console output while the run is in progress."
@@ -272,11 +283,20 @@
"remote_run = experiment.submit(automl_config, show_output = False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results\n",
"## Results\n",
"\n",
"#### Loading Executed Runs\n",
"In case you need to load a previously executed run, enable the cell below and replace the `run_id` value."
@@ -352,7 +372,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cancelling Runs\n",
"### Cancelling Runs\n",
"\n",
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
]
@@ -434,7 +454,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n",
"## Test\n",
"\n",
"#### Load Test Data"
]

View File

@@ -13,20 +13,33 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Sample Weight\n",
"# Automated Machine Learning\n",
"_**Sample Weight**_\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use sample weight with AutoML. Sample weight is used where some sample values are more important than others.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to configure AutoML to use `sample_weight` and you will see the difference sample weight makes to the test results.\n"
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Train](#Train)\n",
"1. [Test](#Test)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Introduction\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use sample weight with AutoML. Sample weight is used where some sample values are more important than others.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to configure AutoML to use `sample_weight` and you will see the difference sample weight makes to the test results."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -87,8 +100,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -106,7 +117,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"## Train\n",
"\n",
"Instantiate two `AutoMLConfig` objects. One will be used with `sample_weight` and one without."
]
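A minimal sketch of the two configurations; the data variables, weights, metric, and iteration count are illustrative, and the `sample_weight` parameter name should be verified against your SDK version.

```python
from azureml.train.automl import AutoMLConfig

common_settings = dict(task = 'classification',
                       primary_metric = 'AUC_weighted',   # illustrative
                       iterations = 10,                   # illustrative
                       X = X_train,
                       y = y_train,
                       path = project_folder)

automl_config_no_weights = AutoMLConfig(**common_settings)
automl_config_with_weights = AutoMLConfig(sample_weight = sample_weight, **common_settings)
```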
@@ -153,8 +164,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment objects and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
@@ -176,7 +185,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"## Test\n",
"\n",
"#### Load Test Data"
]

View File

@@ -13,11 +13,25 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning: Train Test Split and Handling Sparse Data\n",
"# Automated Machine Learning\n",
"_**Train Test Split and Handling Sparse Data**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"In this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML for handling sparse data and how to specify custom cross validations splits.\n",
"\n",
"Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
@@ -35,7 +49,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
@@ -94,8 +108,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
@@ -113,7 +125,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating Sparse Data"
"## Data"
]
},
{
@@ -155,7 +167,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
@@ -197,8 +209,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
@@ -212,11 +222,20 @@
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
"## Results"
]
},
{
@@ -324,7 +343,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Best Fitted Model"
"## Test"
]
},
{

View File

@@ -1,179 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Azure ML & Azure Databricks notebooks by Parashar Shah.\n",
"\n",
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We support installing AML SDK as library from GUI. When attaching a library follow this https://docs.databricks.com/user-guide/libraries.html and add the below string as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.\n",
"\n",
"**azureml-sdk**\n",
"* Source: Upload Python Egg or PyPi\n",
"* PyPi Name: `azureml-sdk[databricks]`\n",
"* Select Install Library"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"# Check core SDK version number - based on build number of preview/master.\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![04ACI](files/tables/image2b.JPG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please specify the Azure subscription Id, resource group name, workspace name, and the region in which you want to create the Azure Machine Learning Workspace.\n",
"\n",
"You can get the value of your Azure subscription ID from the Azure Portal, and then selecting Subscriptions from the menu on the left.\n",
"\n",
"For the resource_group, use the name of the resource group that contains your Azure Databricks Workspace.\n",
"\n",
"NOTE: If you provide a resource group name that does not exist, the resource group will be automatically created. This may or may not succeed in your environment, depending on the permissions you have on your Azure Subscription."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# subscription_id = \"<your-subscription-id>\"\n",
"# resource_group = \"<your-existing-resource-group>\"\n",
"# workspace_name = \"<a-new-or-existing-workspace; it is unrelated to Databricks workspace>\"\n",
"# workspace_region = \"<your-resource group-region>\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"# exist_ok checks if workspace exists or not.\n",
"\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" exist_ok=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#get workspace details\n",
"ws.get_details()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()\n",
"##if you need to give a different path/filename please use this\n",
"##write_config(path=\"/databricks/driver/aml_config/\",file_name=<alias_conf.cfg>)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"help(Workspace)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"#ws = Workspace.from_config(<full path>)\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "pasha"
},
{
"name": "wamartin"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"name": "01.Installation_and_Configuration",
"notebookId": 3836944406456490
},
"nbformat": 4,
"nbformat_minor": 1
}

View File

@@ -0,0 +1,70 @@
Azure Databricks is a managed Spark offering on Azure and customers already use it for advanced analytics. It provides a collaborative notebook-based environment with CPU- or GPU-based compute clusters.
In this section, you will see sample notebooks on how to use the Azure Machine Learning SDK with Azure Databricks. You can train a model using Spark MLlib and then deploy it to ACI/AKS from within Azure Databricks. You can also use the Automated ML capability (**public preview**) of the Azure ML SDK with Azure Databricks.
- Customers who use Azure Databricks for advanced analytics can now use the same cluster to run experiments with or without automated machine learning.
- You can keep the data within the same cluster.
- You can leverage the local worker nodes with autoscale and auto termination capabilities.
- You can use multiple cores of your Azure Databricks cluster to perform simultaneous training.
- You can further tune the model generated by automated machine learning if you choose to.
- Every run (including the best run) is available as a pipeline.
- The model trained using Azure Databricks can be registered in the Azure ML workspace and then deployed to Azure managed compute (ACI or AKS) using the Azure Machine Learning SDK.
**Create Azure Databricks Cluster:**
Select New Cluster and fill in the following details:
- Cluster name: _yourclustername_
- Databricks Runtime: Any 4.x runtime.
- Python version: **3**
- Workers: 2 or higher.
These settings are only for using Automated Machine Learning on Databricks.
- The max. number of **concurrent iterations** in the Automated ML settings must be **<=** the number of **worker nodes** in your Databricks cluster.
- Worker node VM types: **Memory optimized VM** preferred.
- Uncheck _Enable Autoscaling_
It will take a few minutes to create the cluster. Please ensure that the cluster state is running before proceeding further.
**Install Azure ML SDK without Automated ML capability on your Azure Databricks cluster**
- Select Import library
- Source: Upload Python Egg or PyPI
- PyPi Name: **azureml-sdk[databricks]**
**Install Azure ML with Automated ML SDK on your Azure Databricks cluster**
- Select Import library
- Source: Upload Python Egg or PyPI
- PyPi Name: **azureml-sdk[automl_databricks]**
**For installation with or without Automated ML**
- Click Install Library
- Do not select _Attach automatically to all clusters_. If you selected it earlier, go to your Home folder and deselect it.
- Select the check box _Attach_ next to your cluster name
(More details on how to attach and detach libs are here - [https://docs.databricks.com/user-guide/libraries.html#attach-a-library-to-a-cluster](https://docs.databricks.com/user-guide/libraries.html#attach-a-library-to-a-cluster) )
- Ensure that there are no errors and that the Status changes to _Attached_. It may take a couple of minutes.
**Note** - If you have an old build, deselect it from the cluster's installed libraries and move it to trash. Install the new build and restart the cluster. If there is still an issue, detach and reattach your cluster.
The iPython notebooks 1-4 have to be run sequentially after making changes based on your subscription. The corresponding DBC archive contains all the notebooks and can be imported into your Databricks workspace. You can then run the notebooks after importing [databricks_amlsdk](Databricks_AMLSDK_1-4_6.dbc) instead of downloading them individually.
Notebooks 1-4 are related to an income prediction experiment based on this [dataset](https://archive.ics.uci.edu/ml/datasets/adult) and demonstrate how to prepare data, train, and operationalize a Spark ML model with the Azure ML Python SDK from within Azure Databricks. Notebook 6 is an Automated ML sample notebook, sketched below.
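For orientation, the heart of that Automated ML sample is an `AutoMLConfig` that passes the Databricks Spark context. A minimal sketch follows (assuming it runs in a Databricks notebook where `sc` is the Spark context and `X_train`/`y_train` already hold your training data; the exact settings are illustrative):

```python
import logging
from azureml.train.automl import AutoMLConfig

# Sketch only: sc, X_train and y_train are assumed to exist in the Databricks notebook.
automl_config = AutoMLConfig(task = 'classification',
                             primary_metric = 'AUC_weighted',
                             iteration_timeout_minutes = 10,
                             iterations = 30,
                             n_cross_validations = 10,
                             max_concurrent_iterations = 2,  # keep <= number of worker nodes
                             spark_context = sc,             # Databricks/Spark related
                             X = X_train,
                             y = y_train,
                             verbosity = logging.INFO,
                             path = '.')
```

You then submit it with `experiment.submit(automl_config, show_output=True)` just as in the other AutoML samples.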
For details on SDK concepts, please refer to [notebooks](https://github.com/Azure/MachineLearningNotebooks).
Learn more about [how to use Azure Databricks as a development environment](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment#azure-databricks) for Azure Machine Learning service.
You can also use Azure Databricks as a compute target for [training models with an Azure Machine Learning pipeline](https://docs.microsoft.com/machine-learning/service/how-to-set-up-training-targets#databricks).
**Please let us know your feedback.**

View File

@@ -0,0 +1,264 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Azure ML & Azure Databricks notebooks by Parashar Shah.\n",
"\n",
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We support installing AML SDK as library from GUI. When attaching a library follow this https://docs.databricks.com/user-guide/libraries.html and add the below string as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.\n",
"\n",
"**install azureml-sdk**\n",
"* Source: Upload Python Egg or PyPi\n",
"* PyPi Name: `azureml-sdk[databricks]`\n",
"* Select Install Library"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"# Check core SDK version number - based on build number of preview/master.\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![04ACI](files/tables/image2b.JPG)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please specify the Azure subscription Id, resource group name, workspace name, and the region in which you want to create the Azure Machine Learning Workspace.\n",
"\n",
"You can get the value of your Azure subscription ID from the Azure Portal, and then selecting Subscriptions from the menu on the left.\n",
"\n",
"For the resource_group, use the name of the resource group that contains your Azure Databricks Workspace.\n",
"\n",
"NOTE: If you provide a resource group name that does not exist, the resource group will be automatically created. This may or may not succeed in your environment, depending on the permissions you have on your Azure Subscription."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# subscription_id = \"<your-subscription-id>\"\n",
"# resource_group = \"<your-existing-resource-group>\"\n",
"# workspace_name = \"<a-new-or-existing-workspace; it is unrelated to Databricks workspace>\"\n",
"# workspace_region = \"<your-resource group-region>\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##TESTONLY\n",
"# import auth creds from notebook parameters\n",
"tenant = dbutils.widgets.get('tenant_id')\n",
"username = dbutils.widgets.get('service_principal_id')\n",
"password = dbutils.widgets.get('service_principal_password')\n",
"\n",
"auth = azureml.core.authentication.ServicePrincipalAuthentication(tenant, username, password)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##TESTONLY\n",
"subscription_id = dbutils.widgets.get('subscription_id')\n",
"resource_group = dbutils.widgets.get('resource_group')\n",
"workspace_name = dbutils.widgets.get('workspace_name')\n",
"workspace_region = dbutils.widgets.get('workspace_region')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##TESTONLY\n",
"# import the Workspace class and check the azureml SDK version\n",
"# exist_ok checks if workspace exists or not.\n",
"\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" auth = auth,\n",
" exist_ok=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##PUBLISHONLY\n",
"## import the Workspace class and check the azureml SDK version\n",
"## exist_ok checks if workspace exists or not.\n",
"#\n",
"#from azureml.core import Workspace\n",
"#\n",
"#ws = Workspace.create(name = workspace_name,\n",
"# subscription_id = subscription_id,\n",
"# resource_group = resource_group, \n",
"# location = workspace_region,\n",
"# exist_ok=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#get workspace details\n",
"ws.get_details()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##TESTONLY\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group,\n",
" auth = auth)\n",
"\n",
"# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##PUBLISHONLY\n",
"#ws = Workspace(workspace_name = workspace_name,\n",
"# subscription_id = subscription_id,\n",
"# resource_group = resource_group)\n",
"#\n",
"## persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"#ws.write_config()\n",
"###if you need to give a different path/filename please use this\n",
"###write_config(path=\"/databricks/driver/aml_config/\",file_name=<alias_conf.cfg>)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"help(Workspace)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##TESTONLY\n",
"# import the Workspace class and check the azureml SDK version\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config(auth = auth)\n",
"#ws = Workspace.from_config(<full path>)\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##PUBLISHONLY\n",
"## import the Workspace class and check the azureml SDK version\n",
"#from azureml.core import Workspace\n",
"#\n",
"#ws = Workspace.from_config()\n",
"##ws = Workspace.from_config(<full path>)\n",
"#print('Workspace name: ' + ws.name, \n",
"# 'Azure region: ' + ws.location, \n",
"# 'Subscription id: ' + ws.subscription_id, \n",
"# 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "pasha"
},
{
"name": "wamartin"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"name": "01.Installation_and_Configuration",
"notebookId": 3836944406456490
},
"nbformat": 4,
"nbformat_minor": 1
}

View File

@@ -9,6 +9,18 @@
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We support installing AML SDK as library from GUI. When attaching a library follow this https://docs.databricks.com/user-guide/libraries.html and add the below string as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.\n",
"\n",
"**install azureml-sdk with Automated ML**\n",
"* Source: Upload Python Egg or PyPi\n",
"* PyPi Name: `azureml-sdk[automl_databricks]`\n",
"* Select Install Library"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -398,11 +410,11 @@
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**spark_context**|Spark Context object. for Databricks, use spark_context=sc|\n",
"|**max_cuncurrent_iterations**|Maximum number of iterations to execute in parallel. This should be less than the number of cores on the ADB..|\n",
"|**max_concurrent_iterations**|Maximum number of iterations to execute in parallel. This should be <= number of worker nodes in your Azure Databricks cluster.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
"|**concurrent_iterations**|number of concurrent runs <= total cores in all worker nodes in your Databricks cluster|\n",
"|**preprocess**|set this to True to enable pre-processing of data eg. string to numeric using one-hot encoding|\n",
"|**exit_score**|Target score for experiment. It is associated with the metric. eg. exit_score=0.995 will exit experiment after that|"
]
},
@@ -418,7 +430,7 @@
" iteration_timeout_minutes = 10,\n",
" iterations = 30,\n",
" n_cross_validations = 10,\n",
" max_concurrent_iterations = 8, #change it based on number of cores in worker nodes\n",
" max_concurrent_iterations = 2, #change it based on number of worker nodes\n",
" verbosity = logging.INFO,\n",
" spark_context=sc, #databricks/spark related\n",
" X = X_train, \n",
@@ -592,6 +604,13 @@
" display(fig)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When deploying an automated ML trained model, please specify _pip_packages=['azureml-sdk[automl]']_ in your CondaDependencies."
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -1,47 +0,0 @@
**PREVIEW capability**
Automated ML now supports Azure Databricks as a local compute to perform training (**public preview**). Azure Databricks is a managed Spark offering on Azure and customers already use it for advanced analytics. It provides a collaborative Notebook based environment with CPU or GPU based compute cluster.
- Customers who use Azure Databricks for advanced analytics can now use the same cluster to run automated machine learning experiments.
- You can keep the data within the same cluster.
- You can leverage the local worker nodes with autoscale and auto termination capabilities.
- You can use multiple cores of your Azure Databricks cluster to perform simultenous training.
- You can further tune the model generated by automated machine learning if you chose to.
- Every run (including the best run) is available as a pipeline.
- The model from the pipeline can be registered in Azure ML SDK workspace and then deployed to Azure managed compute (ACI or AKS) using the Azure Machine learning SDK.
**Create Azure Databricks Cluster:**
Select New Cluster and fill in following detail:
- Cluster name: _yourclustername_
- Cluster Mode: Any. **High Concurrency** preferred
- Databricks Runtime: Any 4.x runtime.
- Python version: **3**
- Workers: 2 or higher.
- Max. number of **concurrent iterations** in Automated ML settings is **<=** to the number of **worker nodes** in your Databricks cluster.
- Worker node VM types: **Memory optimized VM** preferred.
- Uncheck _Enable Autoscaling_
It will take few minutes to create the cluster. Please ensure that the cluster state is running before proceeding further.
**Install Azure ML with Automated ML SDK on your Azure Databricks cluster**
- Select Import library
- Source: Upload Python Egg or PyPI
- PyPi Name: **azureml-sdk[automl_databricks]**
- Click Install Library
- Do not select _Attach automatically to all clusters_. In case you have selected earlier then you can go to your Home folder and deselect it.
- Select the check box _Attach_ next to your cluster name
(More details on how to attach and detach libs are here - [https://docs.databricks.com/user-guide/libraries.html#attach-a-library-to-a-cluster](https://docs.databricks.com/user-guide/libraries.html#attach-a-library-to-a-cluster) )
- Ensure that there are no errors until Status changes to _Attached_. It may take a couple of minutes.
**Note** - If you have the old build the please deselect it from clusters installed libs > move to trash. Install the new build and restart the cluster. And if still there is an issue then detach and reattach your cluster.
**Now you can run the Automated ML sample notebook on your Azure Databricks cluster. Please let us know your feedback.**

View File

@@ -52,8 +52,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Set up your configuration and create a workspace\n",
"Follow Notebook 00 instructions to do this.\n"
"## 2. Set up your configuration and create a workspace\n"
]
},
{
@@ -103,8 +102,7 @@
"\n",
"### b. In your run function add:\n",
"```python\n",
"print (\"saving input data\" + time.strftime(\"%H:%M:%S\"))\n",
"print (\"saving prediction data\" + time.strftime(\"%H:%M:%S\"))```"
"print (\"Prediction created\" + time.strftime(\"%H:%M:%S\"))```"
]
},
{
@@ -120,7 +118,6 @@
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n",
"from azureml.monitoring import ModelDataCollector\n",
"import time\n",
"\n",
"def init():\n",
@@ -134,34 +131,16 @@
" \n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
" \n",
" global inputs_dc, prediction_dc\n",
" \n",
" # this setup will help us save our inputs under the \"inputs\" path in our Azure Blob\n",
" inputs_dc = ModelDataCollector(model_name=\"sklearn_regression_model\", identifier=\"inputs\", feature_names=[\"feat1\", \"feat2\"]) \n",
" \n",
" # this setup will help us save our ipredictions under the \"predictions\" path in our Azure Blob\n",
" prediction_dc = ModelDataCollector(\"sklearn_regression_model\", identifier=\"predictions\", feature_names=[\"prediction1\", \"prediction2\"]) \n",
" \n",
"\n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
" global inputs_dc, prediction_dc\n",
" try:\n",
" data = json.loads(raw_data)['data']\n",
" data = numpy.array(data)\n",
" result = model.predict(data)\n",
" \n",
" #Print statement for appinsights custom traces:\n",
" print (\"saving input data\" + time.strftime(\"%H:%M:%S\"))\n",
" \n",
" #this call is saving our input data into our blob\n",
" inputs_dc.collect(data) \n",
" #this call is saving our prediction data into our blob\n",
" prediction_dc.collect(result)\n",
" \n",
" #Print statement for appinsights custom traces:\n",
" print (\"saving prediction data\" + time.strftime(\"%H:%M:%S\"))\n",
" # you can return any data type as long as it is JSON-serializable\n",
" print (\"Prediction created\" + time.strftime(\"%H:%M:%S\"))\n",
" # you can return any datatype as long as it is JSON-serializable\n",
" return result.tolist()\n",
" except Exception as e:\n",
" error = str(e)\n",
@@ -221,6 +200,75 @@
"image.wait_for_creation(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy to ACI (Optional)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
" memory_gb = 1, \n",
" tags = {'area': \"diabetes\", 'type': \"regression\"}, \n",
" description = 'Predict diabetes using regression model',\n",
" enable_app_insights = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'my-aci-service-4'\n",
"print(aci_service_name)\n",
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
" image = image,\n",
" name = aci_service_name,\n",
" workspace = ws)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
" [1,28,13,45,54,6,57,8,8,10], \n",
" [101,9,8,37,6,45,4,3,2,41]\n",
"]})\n",
"test_sample = bytes(test_sample,encoding='utf8')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if aci_service.state == \"Healthy\":\n",
" prediction = aci_service.run(input_data=test_sample)\n",
" print(prediction)\n",
"else:\n",
" raise ValueError(\"Service deployment isn't healthy, can't call the service\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -232,7 +280,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AKS compute if you haven't done so (Notebook 11)"
"### Create AKS compute if you haven't done so."
]
},
{
@@ -244,7 +292,7 @@
"# Use the default configuration (can also provide parameters to customize)\n",
"prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"aks_name = 'my-aks-test2' \n",
"aks_name = 'my-aks-test3' \n",
"# Create the cluster\n",
"aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
@@ -258,7 +306,15 @@
"outputs": [],
"source": [
"%%time\n",
"aks_target.wait_for_completion(show_output = True)\n",
"aks_target.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)"
]
@@ -317,17 +373,18 @@
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='aks-w-dc3'\n",
"\n",
"aks_service = Webservice.deploy_from_image(workspace = ws, \n",
" name = aks_service_name,\n",
" image = image,\n",
" deployment_config = aks_config,\n",
" deployment_target = aks_target\n",
" )\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
"if aks_target.provisioning_state== \"Succeeded\": \n",
" aks_service_name ='aks-w-dc5'\n",
" aks_service = Webservice.deploy_from_image(workspace = ws, \n",
" name = aks_service_name,\n",
" image = image,\n",
" deployment_config = aks_config,\n",
" deployment_target = aks_target\n",
" )\n",
" aks_service.wait_for_deployment(show_output = True)\n",
" print(aks_service.state)\n",
"else:\n",
" raise ValueError(\"AKS provisioning failed.\")"
]
},
{
@@ -352,8 +409,11 @@
"]})\n",
"test_sample = bytes(test_sample,encoding='utf8')\n",
"\n",
"prediction = aks_service.run(input_data=test_sample)\n",
"print(prediction)"
"if aks_service.state == \"Healthy\":\n",
" prediction = aks_service.run(input_data=test_sample)\n",
" print(prediction)\n",
"else:\n",
" raise ValueError(\"Service deployment isn't healthy, can't call the service\")"
]
},
{
@@ -384,6 +444,26 @@
"source": [
"aks_service.update(enable_app_insights=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service.delete()\n",
"aci_service.delete()\n",
"image.delete()\n",
"model.delete()"
]
}
],
"metadata": {
@@ -393,9 +473,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python [default]",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -407,7 +487,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.5"
}
},
"nbformat": 4,

View File

@@ -237,7 +237,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AKS compute if you haven't done so (Notebook 11)"
"### Create AKS compute if you haven't done so."
]
},
{
@@ -324,17 +324,18 @@
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='aks-w-dc2'\n",
"\n",
"aks_service = Webservice.deploy_from_image(workspace = ws, \n",
" name = aks_service_name,\n",
" image = image,\n",
" deployment_config = aks_config,\n",
" deployment_target = aks_target\n",
" )\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
"if aks_target.provisioning_state== \"Succeeded\": \n",
" aks_service_name ='aks-w-dc0'\n",
" aks_service = Webservice.deploy_from_image(workspace = ws, \n",
" name = aks_service_name,\n",
" image = image,\n",
" deployment_config = aks_config,\n",
" deployment_target = aks_target\n",
" )\n",
" aks_service.wait_for_deployment(show_output = True)\n",
" print(aks_service.state)\n",
"else: \n",
" raise ValueError(\"aks provisioning failed, can't deploy service\")"
]
},
{
@@ -363,8 +364,11 @@
"]})\n",
"test_sample = bytes(test_sample,encoding = 'utf8')\n",
"\n",
"prediction = aks_service.run(input_data = test_sample)\n",
"print(prediction)"
"if aks_service.state == \"Healthy\":\n",
" prediction = aks_service.run(input_data=test_sample)\n",
" print(prediction)\n",
"else:\n",
" raise ValueError(\"Service deployment isn't healthy, can't call the service\")"
]
},
{
@@ -423,6 +427,25 @@
"source": [
"aks_service.update(collect_model_data=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service.delete()\n",
"image.delete()\n",
"model.delete()"
]
}
],
"metadata": {
@@ -432,9 +455,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python [default]",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -446,7 +469,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.5"
}
},
"nbformat": 4,

View File

@@ -1,14 +1,20 @@
# ONNX on Azure Machine Learning
# ONNX on Azure Machine Learning
These tutorials show how to create and deploy [ONNX](http://onnx.ai) models in Azure Machine Learning environments using [ONNX Runtime](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx) for inference. Once deployed as a web service, you can ping the model with your own set of images to be analyzed!
These tutorials show how to create and deploy Open Neural Network eXchange ([ONNX](http://onnx.ai)) models in Azure Machine Learning environments using [ONNX Runtime](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx) for inference. Once deployed as a web service, you can ping the model with your own set of images to be analyzed!
## Tutorials
- [Obtain ONNX model from ONNX Model Zoo and deploy with ONNX Runtime inference - Handwritten Digit Classification (MNIST)](https://github.com/Azure/MachineLearningNotebooks/blob/master/onnx/onnx-inference-mnist-deploy.ipynb)
- [Obtain ONNX model from ONNX Model Zoo and deploy with ONNX Runtime inference - Facial Expression Recognition (Emotion FER+)](https://github.com/Azure/MachineLearningNotebooks/blob/master/onnx/onnx-inference-facial-emotion-recognition-deploy.ipynb)
- [Obtain ONNX model from ONNX Model Zoo and deploy with ONNX Runtime inference - Image Recognition (ResNet50)](https://github.com/Azure/MachineLearningNotebooks/blob/master/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb)
- [Convert ONNX model from CoreML and deploy - TinyYOLO](https://github.com/Azure/MachineLearningNotebooks/blob/master/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb)
- [Train ONNX model in PyTorch and deploy - MNIST](https://github.com/Azure/MachineLearningNotebooks/blob/master/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb)
0. [Configure your Azure Machine Learning Workspace](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb)
#### Obtain models from the [ONNX Model Zoo](https://github.com/onnx/models) and deploy with ONNX Runtime Inference
1. [Handwritten Digit Classification (MNIST)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb)
2. [Facial Expression Recognition (Emotion FER+)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb)
#### Demo Notebooks from Microsoft Ignite 2018
Note that the following notebooks do not have evaluation sections for the models since they were deployed as part of a live demo. You can find the respective pre-processing and post-processing code linked from the ONNX Model Zoo Github pages ([ResNet](https://github.com/onnx/models/tree/master/models/image_classification/resnet), [TinyYoloV2](https://github.com/onnx/models/tree/master/tiny_yolov2)), or experiment with the ONNX models by [running them in the browser](https://microsoft.github.io/onnxjs-demo/#/).
3. [Image Recognition (ResNet50)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb)
4. [Convert Core ML Model to ONNX and deploy - Real Time Object Detection (TinyYOLO)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb)
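All of these samples score the model with the ONNX Runtime Python API. As a minimal local sketch (assuming `onnxruntime` and `numpy` are installed, and `model.onnx` is a placeholder path for a model downloaded from the Model Zoo):

```python
import numpy as np
import onnxruntime

# Load the model and look up its input/output names.
session = onnxruntime.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# MNIST-style input: a batch of one 28x28 grayscale image.
data = np.random.rand(1, 1, 28, 28).astype("float32")

# Run inference; the result is a list with one array of class scores.
result = session.run([output_name], {input_name: data})
print(result[0].shape)
```

The scoring scripts deployed in these notebooks wrap this same pattern inside their `init()` and `run()` functions.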
## Documentation
- [ONNX Runtime Python API Documentation](http://aka.ms/onnxruntime-python)
@@ -19,7 +25,6 @@ These tutorials show how to create and deploy [ONNX](http://onnx.ai) models in A
- [Azure AI Making AI Real for Business](https://aka.ms/aml-blog-overview)
- [Whats new in Azure Machine Learning](https://aka.ms/aml-blog-whats-new)
## License
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

View File

@@ -255,25 +255,36 @@
" input_name = session.get_inputs()[0].name\n",
" output_name = session.get_outputs()[0].name \n",
" \n",
"\n",
"def preprocess(input_data_json):\n",
" # convert the JSON data into the tensor input\n",
" return np.array(json.loads(input_data_json)['data']).astype('float32')\n",
"\n",
"def postprocess(result):\n",
" # We use argmax to pick the highest confidence label\n",
" return int(np.argmax(np.array(result).squeeze(), axis=0))\n",
" \n",
"def run(input_data):\n",
" '''Purpose: evaluate test input in Azure Cloud using onnxruntime.\n",
" We will call the run function later from our Jupyter Notebook \n",
" so our azure service can evaluate our model input in the cloud. '''\n",
"\n",
" try:\n",
" # load in our data, convert to readable format\n",
" data = np.array(json.loads(input_data)['data']).astype('float32')\n",
"\n",
" data = preprocess(input_data)\n",
" \n",
" # start timer\n",
" start = time.time()\n",
" r = session.run([output_name], {input_name: data})[0]\n",
" \n",
" r = session.run([output_name], {input_name: data})\n",
" \n",
" #end timer\n",
" end = time.time()\n",
" result = choose_class(r[0])\n",
" result_dict = {\"result\": [result],\n",
" \"time_in_sec\": [end - start]}\n",
" \n",
" result = postprocess(r)\n",
" result_dict = {\"result\": result,\n",
" \"time_in_sec\": end - start}\n",
" except Exception as e:\n",
" result_dict = {\"error\": str(e)}\n",
" \n",
" return json.dumps(result_dict)\n",
" return result_dict\n",
"\n",
"def choose_class(result_prob):\n",
" \"\"\"We use argmax to determine the right label to choose from our output\"\"\"\n",
@@ -423,7 +434,16 @@
"\n",
"If you've made it this far, you've deployed a working VM with a handwritten digit classifier running in the cloud using Azure ML. Congratulations!\n",
"\n",
"Let's see how well our model deals with our test images."
"You can get the URL for the webservice with the code below. Let's now see how well our model deals with our test images."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(aci_service.scoring_uri)"
]
},
{
@@ -544,14 +564,14 @@
" input_data = json.dumps({'data': test_inputs[i].tolist()})\n",
" \n",
" # predict using the deployed model\n",
" r = json.loads(aci_service.run(input_data))\n",
" r = aci_service.run(input_data)\n",
" \n",
" if \"error\" in r:\n",
" print(r['error'])\n",
" break\n",
" \n",
" result = r['result'][0]\n",
" time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n",
" result = r['result']\n",
" time_ms = np.round(r['time_in_sec'] * 1000, 2)\n",
" \n",
" ground_truth = int(np.argmax(test_outputs[i]))\n",
" \n",
@@ -658,9 +678,9 @@
" input_data = json.dumps({'data': img.tolist()})\n",
"\n",
" try:\n",
" r = json.loads(aci_service.run(input_data))\n",
" result = r['result'][0]\n",
" time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n",
" r = aci_service.run(input_data)\n",
" result = r['result']\n",
" time_ms = np.round(r['time_in_sec'] * 1000, 2)\n",
" except Exception as e:\n",
" print(str(e))\n",
"\n",
@@ -783,7 +803,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.5"
},
"msauthor": "vinitra.swamy"
},

View File

@@ -8,6 +8,8 @@ The Python-based Azure Machine Learning Pipeline SDK provides interfaces to work
Data management and reuse across pipelines and pipeline runs is simplified using named and strictly versioned data sources and named inputs and outputs for processing tasks. Pipelines enable collaboration across teams of data scientists by recording all intermediate tasks and data.
Learn more about how to [create your first machine learning pipeline](https://docs.microsoft.com/azure/machine-learning/service/how-to-create-your-first-pipeline).
### Why build pipelines?
With pipelines, you can optimize your workflow with simplicity, speed, portability, and reuse. When building pipelines with Azure Machine Learning, you can focus on what you know best — machine learning — rather than infrastructure.
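As a rough sketch of the pattern the notebooks below build on (here `train.py` and the compute target name `cpu-cluster` are placeholders for your own script and an existing cluster), a single-step pipeline looks like this:

```python
from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# One step that runs train.py on an existing compute target.
train_step = PythonScriptStep(name="train",
                              script_name="train.py",
                              source_directory=".",
                              compute_target="cpu-cluster")

pipeline = Pipeline(workspace=ws, steps=[train_step])
pipeline_run = Experiment(ws, "pipeline-demo").submit(pipeline)
pipeline_run.wait_for_completion()
```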
@@ -41,9 +43,9 @@ In this directory, there are two types of notebooks:
3. [aml-pipelines-publish-and-run-using-rest-endpoint.ipynb](https://aka.ms/pl-pub-rep)
4. [aml-pipelines-data-transfer.ipynb](https://aka.ms/pl-data-trans)
5. [aml-pipelines-use-databricks-as-compute-target.ipynb](https://aka.ms/pl-databricks)
6. [aml-pipelines-use-adla-as-compute-target.ipynb] (https://aka.ms/pl-adla)
6. [aml-pipelines-use-adla-as-compute-target.ipynb](https://aka.ms/pl-adla)
* The second type of notebooks illustrate more sophisticated scenarios, and are independent of each other. These notebooks include:
1. [pipeline-batch-scoring.ipynb](https://aka.ms/pl-batch-score)
2. [pipeline-style-transfer.ipynb] (https://aka.ms/pl-style-trans)
2. [pipeline-style-transfer.ipynb](https://aka.ms/pl-style-trans)

View File

@@ -101,9 +101,9 @@
"\n",
"workspace = ws.name\n",
"datastore_name='MyAdlsDatastore'\n",
"subscription_id=os.getenv(\"ADL_SUBSCRIPTION_62\" \"<my-subscription-id>\"), # subscription id of ADLS account\n",
"resource_group=os.getenv(\"ADL_RESOURCE_GROUP_62\" \"<my-resource-group>\"), # resource group of ADLS account\n",
"store_name=os.getenv(\"ADL_STORENAME_62\", \"<my-datastore-name>\"), # ADLS account name\n",
"subscription_id=os.getenv(\"ADL_SUBSCRIPTION_62\", \"<my-subscription-id>\") # subscription id of ADLS account\n",
"resource_group=os.getenv(\"ADL_RESOURCE_GROUP_62\", \"<my-resource-group>\") # resource group of ADLS account\n",
"store_name=os.getenv(\"ADL_STORENAME_62\", \"<my-datastore-name>\") # ADLS account name\n",
"tenant_id=os.getenv(\"ADL_TENANT_62\", \"<my-tenant-id>\") # tenant id of service principal\n",
"client_id=os.getenv(\"ADL_CLIENTID_62\", \"<my-client-id>\") # client id of service principal\n",
"client_secret=os.getenv(\"ADL_CLIENT_SECRET_62\", \"<my-client-secret>\") # the secret of service principal\n",
@@ -201,7 +201,7 @@
" print('Data factory not found, creating...')\n",
" provisioning_config = DataFactoryCompute.provisioning_configuration()\n",
" data_factory = ComputeTarget.create(workspace, factory_name, provisioning_config)\n",
" data_factory.wait_for_provisioning()\n",
" data_factory.wait_for_completion()\n",
" return data_factory\n",
" else:\n",
" raise e\n",
@@ -310,9 +310,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {

View File

@@ -584,9 +584,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {

View File

@@ -346,9 +346,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {

View File

@@ -94,9 +94,9 @@
"\n",
"workspace = ws.name\n",
"datastore_name='MyAdlsDatastore'\n",
"subscription_id=os.getenv(\"ADL_SUBSCRIPTION_62\" \"<my-subscription-id>\"), # subscription id of ADLS account\n",
"resource_group=os.getenv(\"ADL_RESOURCE_GROUP_62\" \"<my-resource-group>\"), # resource group of ADLS account\n",
"store_name=os.getenv(\"ADL_STORENAME_62\", \"<my-datastore-name>\"), # ADLS account name\n",
"subscription_id=os.getenv(\"ADL_SUBSCRIPTION_62\", \"<my-subscription-id>\") # subscription id of ADLS account\n",
"resource_group=os.getenv(\"ADL_RESOURCE_GROUP_62\", \"<my-resource-group>\") # resource group of ADLS account\n",
"store_name=os.getenv(\"ADL_STORENAME_62\", \"<my-datastore-name>\") # ADLS account name\n",
"tenant_id=os.getenv(\"ADL_TENANT_62\", \"<my-tenant-id>\") # tenant id of service principal\n",
"client_id=os.getenv(\"ADL_CLIENTID_62\", \"<my-client-id>\") # client id of service principal\n",
"client_secret=os.getenv(\"ADL_CLIENT_62_SECRET\", \"<my-client-secret>\") # the secret of service principal\n",
@@ -229,12 +229,10 @@
"\n",
"### Remarks\n",
"\n",
"You can use `@@name@@` syntax in your script to refer to inputs, outputs, resources, and params.\n",
"You can use `@@name@@` syntax in your script to refer to inputs, outputs, and params.\n",
"\n",
"* if `name` is the name of an input or output port binding, any occurences of `@@name@@` in the script\n",
"are replaced with actual data path of corresponding port binding.\n",
"* if `name` is the name of a resource input port binding, any occurences of `@@name@@` in the script\n",
"are replaced with local path of resource after it's downloaded to script directory on a worker node.\n",
"* if `name` matches any key in `params` dict, any occurences of `@@name@@` will be replaced with\n",
"corresponding value in dict.\n",
"\n",
@@ -348,9 +346,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python [default]",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -362,7 +360,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
"version": "3.6.6"
}
},
"nbformat": 4,

View File

@@ -127,7 +127,9 @@
"\n",
"- **Resource Group** - The resource group name of your Azure Machine Learning workspace\n",
"- **Databricks Workspace Name** - The workspace name of your Azure Databricks workspace\n",
"- **Databricks Access Token** - The access token you created in ADB"
"- **Databricks Access Token** - The access token you created in ADB\n",
"\n",
"**The Databricks workspace need to be present in the same subscription as your AML workspace**"
]
},
{
@@ -312,33 +314,73 @@
"- ***name:** Name of the Module\n",
"- **inputs:** List of input connections for data consumed by this step. Fetch this inside the notebook using dbutils.widgets.get(\"input\")\n",
"- **outputs:** List of output port definitions for outputs produced by this step. Fetch this inside the notebook using dbutils.widgets.get(\"output\")\n",
"- **existing_cluster_id:** Cluster ID of an existing Interactive cluster on the Databricks workspace. If you are providing this, do not provide any of the parameters below that are used to create a new cluster such as spark_version, node_type, etc.\n",
"- **spark_version:** Version of spark for the databricks run cluster. default value: 4.0.x-scala2.11\n",
"- **node_type:** Azure vm node types for the databricks run cluster. default value: Standard_D3_v2\n",
"- **num_workers:** Number of workers for the databricks run cluster\n",
"- **autoscale:** The autoscale configuration for the databricks run cluster\n",
"- **spark_env_variables:** Spark environment variables for the databricks run cluster (dictionary of {str:str}). default value: {'PYSPARK_PYTHON': '/databricks/python3/bin/python3'}\n",
"- ***notebook_path:** Path to the notebook in the databricks instance.\n",
"- **notebook_path:** Path to the notebook in the databricks instance. If you are providing this, do not provide python script related paramaters or JAR related parameters.\n",
"- **notebook_params:** Parameters for the databricks notebook (dictionary of {str:str}). Fetch this inside the notebook using dbutils.widgets.get(\"myparam\")\n",
"- **python_script_path:** The path to the python script in the DBFS or S3. If you are providing this, do not provide python_script_name which is used for uploading script from local machine.\n",
"- **python_script_params:** Parameters for the python script (list of str)\n",
"- **main_class_name:** The name of the entry point in a JAR module. If you are providing this, do not provide any python script or notebook related parameters.\n",
"- **jar_params:** Parameters for the JAR module (list of str)\n",
"- **python_script_name:** name of a python script on your local machine (relative to source_directory). If you are providing this do not provide python_script_path which is used to execute a remote python script; or any of the JAR or notebook related parameters.\n",
"- **source_directory:** folder that contains the script and other files\n",
"- **hash_paths:** list of paths to hash to detect a change in source_directory (script file is always hashed)\n",
"- **run_name:** Name in databricks for this run\n",
"- **timeout_seconds:** Timeout for the databricks run\n",
"- **runconfig:** Runconfig to use. Either pass runconfig or each library type as a separate parameter but do not mix the two\n",
"- **maven_libraries:** maven libraries for the databricks run\n",
"- **pypi_libraries:** pypi libraries for the databricks run\n",
"- **egg_libraries:** egg libraries for the databricks run\n",
"- **jar_libraries:** jar libraries for the databricks run\n",
"- **rcran_libraries:** rcran libraries for the databricks run\n",
"- **databricks_compute:** Azure Databricks compute\n",
"- **databricks_compute_name:** Name of Azure Databricks compute\n",
"- **compute_target:** Azure Databricks compute\n",
"- **allow_reuse:** Whether the step should reuse previous results when run with the same settings/inputs\n",
"- **version:** Optional version tag to denote a change in functionality for the step\n",
"\n",
"\\* *denotes required fields* \n",
"*You must provide exactly one of num_workers or autoscale paramaters* \n",
"*You must provide exactly one of databricks_compute or databricks_compute_name parameters*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='notebook_howto'></a>"
"*You must provide exactly one of databricks_compute or databricks_compute_name parameters*\n",
"\n",
"## Use runconfig to specify library dependencies\n",
"You can use a runconfig to specify the library dependencies for your cluster in Databricks. The runconfig will contain a databricks section as follows:\n",
"```yaml\n",
"environment:\n",
"# Databricks details\n",
" databricks:\n",
"# List of maven libraries.\n",
" mavenLibraries:\n",
" - coordinates: org.jsoup:jsoup:1.7.1\n",
" repo: ''\n",
" exclusions:\n",
" - slf4j:slf4j\n",
" - '*:hadoop-client'\n",
"# List of PyPi libraries\n",
" pypiLibraries:\n",
" - package: beautifulsoup4\n",
" repo: ''\n",
"# List of RCran libraries\n",
" rcranLibraries:\n",
" - package: ada\n",
" repo: http://cran.us.r-project.org\n",
"# List of JAR libraries\n",
" jarLibraries:\n",
" - library: dbfs:/mnt/libraries/library.jar\n",
"# List of Egg libraries\n",
" eggLibraries:\n",
" - library: dbfs:/mnt/libraries/library.egg\n",
"```\n",
"\n",
"You can then create a RunConfiguration object using this file and pass it as the runconfig parameter to DatabricksStep.\n",
"```python\n",
"from azureml.core.runconfig import RunConfiguration\n",
"\n",
"runconfig = RunConfiguration()\n",
"runconfig.load(path='<directory_where_runconfig_is_stored>', name='<runconfig_file_name>')\n",
"```"
]
},
{
@@ -383,10 +425,11 @@
"metadata": {},
"outputs": [],
"source": [
"steps = [dbNbStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
"#PUBLISHONLY\n",
"#steps = [dbNbStep]\n",
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
"#pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline)\n",
"#pipeline_run.wait_for_completion()"
]
},
{
@@ -402,8 +445,9 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
"#PUBLISHONLY\n",
"#from azureml.widgets import RunDetails\n",
"#RunDetails(pipeline_run).show()"
]
},
{
@@ -453,10 +497,11 @@
"metadata": {},
"outputs": [],
"source": [
"steps = [dbPythonInDbfsStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
"#PUBLISHONLY\n",
"#steps = [dbPythonInDbfsStep]\n",
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
"#pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline)\n",
"#pipeline_run.wait_for_completion()"
]
},
{
@@ -472,8 +517,9 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
"#PUBLISHONLY\n",
"#from azureml.widgets import RunDetails\n",
"#RunDetails(pipeline_run).show()"
]
},
{
@@ -594,10 +640,11 @@
"metadata": {},
"outputs": [],
"source": [
"steps = [dbJarInDbfsStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
"#PUBLISHONLY\n",
"#steps = [dbJarInDbfsStep]\n",
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
"#pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline)\n",
"#pipeline_run.wait_for_completion()"
]
},
{
@@ -613,8 +660,9 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
"#PUBLISHONLY\n",
"#from azureml.widgets import RunDetails\n",
"#RunDetails(pipeline_run).show()"
]
},
{
@@ -633,9 +681,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -647,7 +695,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
"version": "3.6.2"
}
},
"nbformat": 4,

View File

@@ -396,9 +396,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {

View File

@@ -551,9 +551,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {

View File

@@ -588,9 +588,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {

View File

@@ -3,4 +3,6 @@
These examples show you:
* Distributed training of models on Machine Learning Compute cluster
* Hyperparameter tuning at scale
* Using Tensorboard with Azure ML Python SDK.
* Using Tensorboard with Azure ML Python SDK.
Learn more about how to use the `Estimator` class to [train deep neural networks with Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-ml-models), as sketched below.
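As a minimal sketch of that pattern (here `train.py` and the compute target name `gpu-cluster` are placeholders for your own training script and an existing AmlCompute cluster):

```python
from azureml.core import Workspace, Experiment
from azureml.train.estimator import Estimator

ws = Workspace.from_config()

# Generic estimator; the framework-specific TensorFlow and PyTorch estimators follow the same shape.
est = Estimator(source_directory=".",
                entry_script="train.py",
                compute_target="gpu-cluster",
                conda_packages=["scikit-learn"])

run = Experiment(ws, "train-dnn-demo").submit(est)
run.wait_for_completion(show_output=True)
```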

View File

@@ -251,26 +251,6 @@
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI/Horovod, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, PyTorch, Horovod and their dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `PyTorch` constructor's `pip_packages` or `conda_packages` parameters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the latest version of PyTorch 1.0, run the following cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimator.conda_dependencies.remove_conda_package('pytorch=0.4.0')\n",
"estimator.conda_dependencies.remove_pip_package('horovod==0.13.11')\n",
"estimator.conda_dependencies.add_conda_package('pytorch-nightly')\n",
"estimator.conda_dependencies.add_channel('pytorch')\n",
"estimator.conda_dependencies.add_pip_package('horovod==0.15.2')"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -1,3 +1,8 @@
# Copyright (c) 2017, PyTorch contributors
# Modifications copyright (C) Microsoft Corporation
# Licensed under the BSD license
# Adapted from https://github.com/uber/horovod/blob/master/examples/pytorch_mnist.py
from __future__ import print_function
import argparse
import torch.nn as nn
@@ -11,6 +16,8 @@ from azureml.core.run import Run
# get the Azure ML run object
run = Run.get_context()
print("Torch version:", torch.__version__)
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',

View File

@@ -91,10 +91,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an `AmlCompute` cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
"\n",
"**Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process."
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{

View File

@@ -91,10 +91,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an `AmlCompute` cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
"\n",
"**Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process."
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
"\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{

View File

@@ -278,7 +278,7 @@
"\n",
"If you are unfamiliar with DSVM configuration, check [04. Train in a remote VM](04.train-on-remote-vm.ipynb) for a more detailed breakdown.\n",
"\n",
"**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node AmlCompute. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note that we only support Linux VMs and the commands below will spin a Linux VM only.\n",
"**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.\n",
"\n",
"```shell\n",
"# create a DSVM in your resource group\n",
@@ -294,19 +294,27 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.compute import RemoteCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"import os\n",
"\n",
"username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')\n",
"address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')\n",
"\n",
"compute_target_name = 'cpudsvm'\n",
"\n",
"# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase \n",
"try:\n",
" compute_target = DsvmCompute(workspace=ws, name=compute_target_name)\n",
" print('found existing:', compute_target.name)\n",
" attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)\n",
" print('found existing:', attached_dsvm_compute.name)\n",
"except ComputeTargetException:\n",
" print('creating new.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size=\"Standard_D2_v2\")\n",
" compute_target = DsvmCompute.create(ws, name=compute_target_name, provisioning_configuration=dsvm_config)\n",
"compute_target.wait_for_completion(show_output=True)"
" attached_dsvm_compute = RemoteCompute.attach(workspace=ws,\n",
" name=compute_target_name,\n",
" username=username,\n",
" address=address,\n",
" ssh_port=22,\n",
" private_key_file='./.ssh/id_rsa')\n",
" \n",
" attached_dsvm_compute.wait_for_completion(show_output=True)"
]
},
{
@@ -332,7 +340,7 @@
"# script_params[\"--max_steps\"] = \"5000\"\n",
"\n",
"tf_estimator = TensorFlow(source_directory=exp_dir,\n",
" compute_target=compute_target,\n",
" compute_target=attached_dsvm_compute,\n",
" entry_script='mnist_with_summaries.py',\n",
" script_params=script_params)\n",
"\n",

View File

@@ -182,6 +182,8 @@ def download_data():
def main():
print("Torch version:", torch.__version__)
# get command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument('--num_epochs', type=int, default=25,

View File

@@ -15,7 +15,7 @@
"source": [
"# Train, hyperparameter tune, and deploy with PyTorch\n",
"\n",
"In this tutorial, you will train, hyperparameter tune, and deploy a PyTorch model using the Azure Machine Learning (AML) Python SDK.\n",
"In this tutorial, you will train, hyperparameter tune, and deploy a PyTorch model using the Azure Machine Learning (Azure ML) Python SDK.\n",
"\n",
"This tutorial will train an image classification model using transfer learning, based on PyTorch's [Transfer Learning tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). The model is trained to classify ants and bees by first using a pretrained ResNet18 model that has been trained on the [ImageNet](http://image-net.org/index) dataset."
]
@@ -25,10 +25,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
"* Go through the [Configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`"
]
},
{
@@ -93,10 +90,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an `AmlCompute` cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource.\n",
"\n",
"**Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process."
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace, this code will skip the creation process.\n",
"\n",
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
@@ -176,11 +175,11 @@
"metadata": {},
"source": [
"### Prepare training script\n",
"Now you will need to create your training script. In this tutorial, the training script is already provided for you at `pytorch_train.py`. In practice, you should be able to take any custom training script as is and run it with AML without having to modify your code.\n",
"Now you will need to create your training script. In this tutorial, the training script is already provided for you at `pytorch_train.py`. In practice, you should be able to take any custom training script as is and run it with Azure ML without having to modify your code.\n",
"\n",
"However, if you would like to use AML's [tracking and metrics](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#metrics) capabilities, you will have to add a small amount of AML code inside your training script. \n",
"However, if you would like to use Azure ML's [tracking and metrics](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#metrics) capabilities, you will have to add a small amount of Azure ML code inside your training script. \n",
"\n",
"In `pytorch_train.py`, we will log some metrics to our AML run. To do so, we will access the AML run object within the script:\n",
"In `pytorch_train.py`, we will log some metrics to our Azure ML run. To do so, we will access the Azure ML `Run` object within the script:\n",
"```Python\n",
"from azureml.core.run import Run\n",
"run = Run.get_context()\n",
@@ -238,7 +237,7 @@
"metadata": {},
"source": [
"### Create a PyTorch estimator\n",
"The AML SDK's PyTorch estimator enables you to easily submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch). The following code will define a single-node PyTorch job."
"The Azure ML SDK's PyTorch estimator enables you to easily submit PyTorch training jobs for both single-node and distributed runs. For more information on the PyTorch estimator, refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-train-pytorch). The following code will define a single-node PyTorch job."
]
},
{
@@ -267,7 +266,7 @@
"source": [
"The `script_params` parameter is a dictionary containing the command-line arguments to your training script `entry_script`. Please note the following:\n",
"- We passed our training data reference `ds_data` to our script's `--data_dir` argument. This will 1) mount our datastore on the remote compute and 2) provide the path to the training data `hymenoptera_data` on our datastore.\n",
"- We specified the output directory as `./outputs`. The `outputs` directory is specially treated by AML in that all the content in this directory gets uploaded to your workspace as part of your run history. The files written to this directory are therefore accessible even once your remote run is over. In this tutorial, we will save our trained model to this output directory.\n",
"- We specified the output directory as `./outputs`. The `outputs` directory is specially treated by Azure ML in that all the content in this directory gets uploaded to your workspace as part of your run history. The files written to this directory are therefore accessible even once your remote run is over. In this tutorial, we will save our trained model to this output directory.\n",
"\n",
"To leverage the Azure VM's GPU for training, we set `use_gpu=True`."
]
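Concretely, such an estimator might be assembled along the following lines — a hedged sketch only, where the data reference `ds_data`, the compute target and the source directory are stand-ins for the objects defined earlier in the notebook:

```python
from azureml.train.dnn import PyTorch

# placeholder names: ds_data, compute_target and the project folder are assumptions
script_params = {
    '--data_dir': ds_data,        # mounts the datastore and passes its path to the script
    '--num_epochs': 30,
    '--output_dir': './outputs'   # contents of ./outputs are uploaded to the run history
}

estimator = PyTorch(source_directory='./pytorch-hymenoptera',
                    script_params=script_params,
                    compute_target=compute_target,
                    entry_script='pytorch_train.py',
                    use_gpu=True)
```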
@@ -506,7 +505,7 @@
"metadata": {},
"source": [
"### Create environment file\n",
"Then, we will need to create an environment file (`myenv.yml`) that specifies all of the scoring script's package dependencies. This file is used to ensure that all of those dependencies are installed in the Docker image by AML. In this case, we need to specify `azureml-core`, `torch` and `torchvision`."
"Then, we will need to create an environment file (`myenv.yml`) that specifies all of the scoring script's package dependencies. This file is used to ensure that all of those dependencies are installed in the Docker image by Azure ML. In this case, we need to specify `azureml-core`, `torch` and `torchvision`."
]
},
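One way to produce such a file is with the SDK's `CondaDependencies` helper — a minimal sketch, with the package list taken from the sentence above and the output file name assumed to match the notebook's `myenv.yml`:

```python
from azureml.core.conda_dependencies import CondaDependencies

# declare the scoring script's dependencies
myenv = CondaDependencies.create(pip_packages=['azureml-core', 'torch', 'torchvision'])

# write them out as myenv.yml for use when building the Docker image
with open('myenv.yml', 'w') as f:
    f.write(myenv.serialize_to_string())
```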
{

View File

@@ -258,15 +258,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an `AmlCompute` cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace."
"## Create or Attach existing AmlCompute\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we could not find the cluster with the given name in the previous cell, then we will create a new cluster here. We will create an `AmlCompute` cluster of `STANDARD_NC6` GPU VMs. This process is broken down into 3 steps:\n",
"If we could not find the cluster with the given name, then we will create a new cluster here. We will create an `AmlCompute` cluster of `STANDARD_NC6` GPU VMs. This process is broken down into 3 steps:\n",
"1. create the configuration (this step is local and only takes a second)\n",
"2. create the cluster (this step will take about **20 seconds**)\n",
"3. provision the VMs to bring the cluster to the initial size (of 1 in this case). This step will take about **3-5 minutes** and is providing only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell"

View File

@@ -4,5 +4,6 @@ Follow these sample notebooks to learn:
1. [Train within notebook](train-within-notebook): train a simple scikit-learn model using the Jupyter kernel and deploy the model to Azure Container Service.
2. [Train on local](train-on-local): train a model using local computer as compute target.
3. [Train on remove VM](train-on-remote-vm): train a model using a remote Azure VM as compute target.
4. [Logging API](logging-api): experiment with various logging functions to create runs and automatically generate graphs.
3. [Train on remote VM](train-on-remote-vm): train a model using a remote Azure VM as compute target.
4. [Train on AmlCompute](train-on-amlcompute): train a model using an AmlCompute cluster as compute target.
5. [Logging API](logging-api): experiment with various logging functions to create runs and automatically generate graphs.

View File

@@ -22,7 +22,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [Configuration](../../../configuration.ipynb) Notebook first if you haven't. Also make sure you have tqdm and matplotlib installed in the current kernel.\n",
"Make sure you go through the [00. Installation and Configuration](../../00.configuration.ipynb) Notebook first if you haven't. Also make sure you have tqdm and matplotlib installed in the current kernel.\n",
"\n",
"```\n",
"(myenv) $ conda install -y tqdm matplotlib\n",

View File

@@ -0,0 +1,516 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Train using Azure Machine Learning Compute\n",
"\n",
"* Initialize a Workspace\n",
"* Create an Experiment\n",
"* Introduction to AmlCompute\n",
"* Submit an AmlCompute run in a few different ways\n",
" - Provision as a run based compute target \n",
" - Provision as a persistent compute target (Basic)\n",
" - Provision as a persistent compute target (Advanced)\n",
"* Additional operations to perform on AmlCompute\n",
"* Find the best model in the run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize a Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create An Experiment\n",
"\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"experiment_name = 'train-on-amlcompute'\n",
"experiment = Experiment(workspace = ws, name = experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction to AmlCompute\n",
"\n",
"Azure Machine Learning Compute is managed compute infrastructure that allows the user to easily create single to multi-node compute of the appropriate VM Family. It is created **within your workspace region** and is a resource that can be used by other users in your workspace. It autoscales by default to the max_nodes, when a job is submitted, and executes in a containerized environment packaging the dependencies as specified by the user. \n",
"\n",
"Since it is managed compute, job scheduling and cluster management are handled internally by Azure Machine Learning service. \n",
"\n",
"For more information on Azure Machine Learning Compute, please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)\n",
"\n",
"If you are an existing BatchAI customer who is migrating to Azure Machine Learning, please read [this article](https://aka.ms/batchai-retirement)\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for eg. AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.\n",
"\n",
"\n",
"The training script `train.py` is already created for you. Let's have a look."
]
},
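The script can be printed inline for review — a small sketch, assuming `train.py` sits next to this notebook as described:

```python
# print the training script so its contents can be reviewed in the notebook
with open('./train.py', 'r') as f:
    print(f.read())
```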
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Submit an AmlCompute run in a few different ways\n",
"\n",
"First lets check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the supported_vms () function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.\n",
"\n",
"You can also pass a different region to check availability and then re-create your workspace in that region through the [00. Installation and Configuration](00.configuration.ipynb)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"\n",
"AmlCompute.supported_vmsizes(workspace = ws)\n",
"#AmlCompute.supported_vmsizes(workspace = ws, location='southcentralus')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create project directory\n",
"\n",
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import shutil\n",
"\n",
"project_folder = './train-on-amlcompute'\n",
"os.makedirs(project_folder, exist_ok=True)\n",
"shutil.copy('train.py', project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Provision as a run based compute target\n",
"\n",
"You can provision AmlCompute as a compute target at run-time. In this case, the compute is auto-created for your run, scales up to max_nodes that you specify, and then **deleted automatically** after the run completes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.runconfig import DEFAULT_CPU_IMAGE\n",
"\n",
"# create a new runconfig object\n",
"run_config = RunConfiguration()\n",
"\n",
"# signal that you want to use AmlCompute to execute script.\n",
"run_config.target = \"amlcompute\"\n",
"\n",
"# AmlCompute will be created in the same region as workspace\n",
"# Set vm size for AmlCompute\n",
"run_config.amlcompute.vm_size = 'STANDARD_D2_V2'\n",
"\n",
"# enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# set Docker base image to the default CPU-based image\n",
"run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE\n",
"\n",
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# auto-prepare the Docker image when used for execution (if it is not already prepared)\n",
"run_config.auto_prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"\n",
"# Now submit a run on AmlCompute\n",
"from azureml.core.script_run_config import ScriptRunConfig\n",
"\n",
"script_run_config = ScriptRunConfig(source_directory=project_folder,\n",
" script='train.py',\n",
" run_config=run_config)\n",
"\n",
"run = experiment.submit(script_run_config)\n",
"\n",
"# Show run details\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# Shows output of the run on stdout.\n",
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.get_metrics()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Provision as a persistent compute target (Basic)\n",
"\n",
"You can provision a persistent AmlCompute resource by simply defining two parameters thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continously re-use the same target, debug it between jobs or simply share the resource with other users of your workspace.\n",
"\n",
"* `vm_size`: VM family of the nodes provisioned by AmlCompute. Simply choose from the supported_vmsizes() above\n",
"* `max_nodes`: Maximum nodes to autoscale to while running a job on AmlCompute"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpucluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',\n",
" max_nodes=4)\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"cpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure & Run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# create a new RunConfig object\n",
"run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to AmlCompute target created in previous step\n",
"run_config.target = cpu_cluster.name\n",
"\n",
"# enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"\n",
"from azureml.core import Run\n",
"from azureml.core import ScriptRunConfig\n",
"\n",
"src = ScriptRunConfig(source_directory=project_folder, \n",
" script='train.py', \n",
" run_config=run_config) \n",
"run = experiment.submit(config=src)\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# Shows output of the run on stdout.\n",
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.get_metrics()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Provision as a persistent compute target (Advanced)\n",
"\n",
"You can also specify additional properties or change defaults while provisioning AmlCompute using a more advanced configuration. This is useful when you want a dedicated cluster of 4 nodes (for example you can set the min_nodes and max_nodes to 4), or want the compute to be within an existing VNet in your subscription.\n",
"\n",
"In addition to `vm_size` and `max_nodes`, you can specify:\n",
"* `min_nodes`: Minimum nodes (default 0 nodes) to downscale to while running a job on AmlCompute\n",
"* `vm_priority`: Choose between 'dedicated' (default) and 'lowpriority' VMs when provisioning AmlCompute. Low Priority VMs use Azure's excess capacity and are thus cheaper but risk your run being pre-empted\n",
"* `idle_seconds_before_scaledown`: Idle time (default 120 seconds) to wait after run completion before auto-scaling to min_nodes\n",
"* `vnet_resourcegroup_name`: Resource group of the **existing** VNet within which AmlCompute should be provisioned\n",
"* `vnet_name`: Name of VNet\n",
"* `subnet_name`: Name of SubNet within the VNet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpucluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',\n",
" vm_priority='lowpriority',\n",
" min_nodes=2,\n",
" max_nodes=4,\n",
" idle_seconds_before_scaledown='300',\n",
" vnet_resourcegroup_name='<my-resource-group>',\n",
" vnet_name='<my-vnet-name>',\n",
" subnet_name='<my-subnet-name>')\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"cpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure & Run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# create a new RunConfig object\n",
"run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to AmlCompute target created in previous step\n",
"run_config.target = cpu_cluster.name\n",
"\n",
"# enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"\n",
"from azureml.core import Run\n",
"from azureml.core import ScriptRunConfig\n",
"\n",
"src = ScriptRunConfig(source_directory=project_folder, \n",
" script='train.py', \n",
" run_config=run_config) \n",
"run = experiment.submit(config=src)\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# Shows output of the run on stdout.\n",
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.get_metrics()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Additional operations to perform on AmlCompute\n",
"\n",
"You can perform more operations on AmlCompute such as updating the node counts or deleting the compute. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Get_status () gets the latest status of the AmlCompute target\n",
"cpu_cluster.get_status()\n",
"cpu_cluster.serialize()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Update () takes in the min_nodes, max_nodes and idle_seconds_before_scaledown and updates the AmlCompute target\n",
"#cpu_cluster.update(min_nodes=1)\n",
"#cpu_cluster.update(max_nodes=10)\n",
"cpu_cluster.update(idle_seconds_before_scaledown=300)\n",
"#cpu_cluster.update(min_nodes=2, max_nodes=4, idle_seconds_before_scaledown=600)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Delete () is used to deprovision and delete the AmlCompute target. Useful if you want to re-use the compute name \n",
"#'cpucluster' in this case but use a different VM family for instance.\n",
"\n",
"#cpu_cluster.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the remaining notebooks."
]
}
],
"metadata": {
"authors": [
{
"name": "nigup"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,44 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
from sklearn.externals import joblib
import os
import numpy as np
os.makedirs('./outputs', exist_ok=True)
X, y = load_diabetes(return_X_y=True)
run = Run.get_context()
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.2,
                                                    random_state=0)
data = {"train": {"X": X_train, "y": y_train},
        "test": {"X": X_test, "y": y_test}}
# list of alpha values from 0.0 to 0.95 with a 0.05 interval
alphas = np.arange(0.0, 1.0, 0.05)
for alpha in alphas:
    # Use the Ridge algorithm to create a regression model
    reg = Ridge(alpha=alpha)
    reg.fit(data["train"]["X"], data["train"]["y"])
    preds = reg.predict(data["test"]["X"])
    mse = mean_squared_error(preds, data["test"]["y"])
    run.log('alpha', alpha)
    run.log('mse', mse)
    model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
    # save the model in the outputs folder so it automatically gets uploaded
    joblib.dump(value=reg, filename=os.path.join('./outputs/', model_file_name))
    print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))

View File

@@ -29,7 +29,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [Configuration](../../../configuration.ipynb) Notebook first if you haven't."
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{

View File

@@ -16,7 +16,7 @@
"# 04. Train in a remote Linux VM\n",
"* Create Workspace\n",
"* Create `train.py` file\n",
"* Create (or attach) DSVM as compute resource.\n",
"* Create and Attach a Remote VM (eg. DSVM) as compute resource.\n",
"* Upoad data files into default datastore\n",
"* Configure & execute a run in a few different ways\n",
" - Use system-built conda\n",
@@ -30,7 +30,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [Configuration](../../../configuration.ipynb) Notebook first if you haven't."
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
@@ -188,10 +188,18 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Linux DSVM as a compute target\n",
"## Create and Attach a DSVM as a compute target\n",
"\n",
"**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.\n",
"\n",
"```shell\n",
"# create a DSVM in your resource group\n",
"# note you need to be at least a contributor to the resource group in order to execute this command successfully\n",
"(myenv) $ az vm create --resource-group <resource_group_name> --name <some_vm_name> --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username <username> --admin-password <password> --generate-ssh-keys --authentication-type password\n",
"```\n",
"\n",
"**Note**: You can also use [this url](https://portal.azure.com/#create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal\n",
"\n",
"**Note**: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
" \n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you switch to a different port (such as 5022), you can specify the port number in the provisioning configuration object."
]
},
@@ -201,52 +209,27 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.compute import RemoteCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"import os\n",
"\n",
"username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')\n",
"address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')\n",
"\n",
"compute_target_name = 'cpudsvm'\n",
"\n",
"# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase \n",
"try:\n",
" dsvm_compute = DsvmCompute(workspace=ws, name=compute_target_name)\n",
" print('found existing:', dsvm_compute.name)\n",
" attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)\n",
" print('found existing:', attached_dsvm_compute.name)\n",
"except ComputeTargetException:\n",
" print('creating new.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size=\"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name=compute_target_name, provisioning_configuration=dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach an existing Linux DSVM\n",
"You can also attach an existing Linux VM as a compute target. To create one, you can use Azure CLI command:\n",
"\n",
"```\n",
"az vm create -n cpudsvm -l eastus2 -g <my-resource-group> --size Standard_D2_v2 --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --generate-ssh-keys\n",
"```\n",
"\n",
"The ```--generate-ssh-keys``` automatically places the ssh keys to standard location, typically to ~/.ssh folder. The default port is 22."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"'''\n",
"from azureml.core.compute import ComputeTarget, RemoteCompute \n",
"attach_config = RemoteCompute.attach_configuration(username='<my_username>',\n",
" address='<ip_adress_or_fqdn>',\n",
" ssh_port=22,\n",
" private_key_file='./.ssh/id_rsa')\n",
"attached_dsvm_compute = ComputeTarget.attach(workspace=ws,\n",
" name='attached_vm',\n",
" attach_configuration=attach_config)\n",
"attached_dsvm_compute.wait_for_completion(show_output=True)\n",
"'''\n"
" attached_dsvm_compute = RemoteCompute.attach(workspace=ws,\n",
" name=compute_target_name,\n",
" username=username,\n",
" address=address,\n",
" ssh_port=22,\n",
" private_key_file='./.ssh/id_rsa')\n",
" \n",
" attached_dsvm_compute.wait_for_completion(show_output=True)"
]
},
{
@@ -298,7 +281,7 @@
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"conda_run_config.target = dsvm_compute.name\n",
"conda_run_config.target = attached_dsvm_compute.name\n",
"\n",
"# set the data reference of the run configuration\n",
"conda_run_config.data_references = {ds.name: dr}\n",
@@ -368,7 +351,7 @@
"vm_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"vm_run_config.target = dsvm_compute.name\n",
"vm_run_config.target = attached_dsvm_compute.name\n",
"\n",
"# set the data reference of the run coonfiguration\n",
"conda_run_config.data_references = {ds.name: dr}\n",
@@ -477,7 +460,7 @@
"docker_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"docker_run_config.target = dsvm_compute.name\n",
"docker_run_config.target = attached_dsvm_compute.name\n",
"\n",
"# Use Docker in the remote VM\n",
"docker_run_config.environment.docker.enabled = True\n",

View File

@@ -57,7 +57,7 @@
"---\n",
"\n",
"## Setup\n",
"Make sure you have completed the [Configuration](../../../configuration.ipynb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.\n",
"Make sure you have completed the [Configuration](..\\..\\configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.\n",
"\n",
"We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.\n",
"```shell\n",

11
tutorials/README.md Normal file
View File

@@ -0,0 +1,11 @@
## Azure Machine Learning service Tutorial
Complete these tutorials to learn how to train and deploy models using Azure Machine Learning services and the Python SDK. These notebooks accompany the [tutorial articles starting here](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-train-models-with-aml).
As a pre-requisite, run the [configuration notebook](../configuration.ipynb) first to set up your Azure ML Workspace. Then, run the notebooks in the following recommended order.
* [Tutorial #1](img-classification-part1-training.ipynb): Train an image classification model with Azure Machine Learning
* [Tutorial #2](img-classification-part2-deploy.ipynb): Deploy an image classification model from first tutorial in Azure Container Instance (ACI)
* [Tutorial #3](regression-part1-data-prep.ipynb): Prepare data for regression modeling.
Also find quickstarts and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).

3032
tutorials/dflows.dprep Normal file

File diff suppressed because one or more lines are too long

View File

@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial #1: Prepare data for regression modeling"
"# Tutorial (part 1): Prepare data for regression modeling"
]
},
{
@@ -101,7 +101,7 @@
"all_columns = dprep.ColumnSelector(term=\".*\", use_regex=True)\n",
"drop_if_all_null = [all_columns, dprep.ColumnRelationship(dprep.ColumnRelationship.ALL)]\n",
"useful_columns = [\n",
" \"cost\", \"distance\"\"distance\", \"dropoff_datetime\", \"dropoff_latitude\", \"dropoff_longitude\",\n",
" \"cost\", \"distance\", \"dropoff_datetime\", \"dropoff_latitude\", \"dropoff_longitude\",\n",
" \"passengers\", \"pickup_datetime\", \"pickup_latitude\", \"pickup_longitude\", \"store_forward\", \"vendor\"\n",
"]"
]
@@ -337,6 +337,23 @@
"combined_df = combined_df.replace(columns=\"store_forward\", find=\"0\", replace_with=\"N\").fill_nulls(\"store_forward\", \"N\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Execute another `replace` function, this time on the `distance` field. This reformats distance values that are incorrectly labeled as `.00`, and fills any nulls with zeros. Convert the `distance` field to numerical format."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"combined_df = combined_df.replace(columns=\"distance\", find=\".00\", replace_with=0).fill_nulls(\"distance\", 0)\n",
"combined_df = combined_df.to_number([\"distance\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -507,6 +524,23 @@
"tmp_df.get_profile()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before packaging the dataflow, perform two final filters on the data set. To eliminate incorrect data points, filter the dataflow on records where both the `cost` and `distance` are greater than zero."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tmp_df = tmp_df.filter(dprep.col(\"distance\") > 0)\n",
"tmp_df = tmp_df.filter(dprep.col(\"cost\") > 0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -520,9 +554,12 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"file_path = os.path.join(os.getcwd(), \"dflows.dprep\")\n",
"\n",
"dflow_prepared = tmp_df\n",
"package = dprep.Package([dflow_prepared])\n",
"package.save(\".\\dflows\")"
"package.save(file_path)"
]
},
{
@@ -536,7 +573,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Delete the file `dflows` (whether you are running locally or in Azure Notebooks) in your current directory if you do not wish to continue with part two of the tutorial. If you continue on to part two, you will need the `dflows` file in the current directory."
"Delete the file `dflows.dprep` (whether you are running locally or in Azure Notebooks) in your current directory if you do not wish to continue with part two of the tutorial. If you continue on to part two, you will need the `dflows.dprep` file in the current directory."
]
},
{
@@ -571,9 +608,9 @@
}
],
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial #2: Train a regression model with automated machine learning\n",
"# Tutorial (part 2): Use automated machine learning to build your regression model \n",
"\n",
"This tutorial is **part two of a two-part tutorial series**. In the previous tutorial, you [prepared the NYC taxi data for regression modeling](regression-part1-data-prep.ipynb).\n",
"\n",
@@ -112,7 +112,11 @@
"outputs": [],
"source": [
"import azureml.dataprep as dprep\n",
"package_saved = dprep.Package.open(\".\\dflow\")\n",
"import os\n",
"\n",
"file_path = os.path.join(os.getcwd(), \"dflows.dprep\")\n",
"\n",
"package_saved = dprep.Package.open(file_path)\n",
"dflow_prepared = package_saved.dataflows[0]\n",
"dflow_prepared.get_profile()"
]
@@ -130,7 +134,7 @@
"metadata": {},
"outputs": [],
"source": [
"dflow_X = dflow_prepared.keep_columns(['pickup_weekday', 'dropoff_latitude', 'dropoff_longitude','pickup_hour','pickup_longitude','pickup_latitude','passengers'])\n",
"dflow_X = dflow_prepared.keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor'])\n",
"dflow_y = dflow_prepared.keep_columns('cost')"
]
},
@@ -155,7 +159,7 @@
"x_df = dflow_X.to_pandas_dataframe()\n",
"y_df = dflow_y.to_pandas_dataframe()\n",
"\n",
"x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=123)\n",
"x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)\n",
"# flatten y_train to 1d array\n",
"y_train.values.flatten()"
]
@@ -373,7 +377,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Compare the predicted cost values with the actual cost values. Use the `y_test` dataframe, and convert it to a list to compare to the predicted values. The function `mean_absolute_error` takes two arrays of values, and calculates the average absolute value error between them. In this example, a mean absolute error of 3.5 would mean that on average, the model predicts the cost within plus or minus 3.5 of the actual value."
"Create a scatter plot to visualize the predicted cost values compared to the actual cost values. The following code uses the `distance` feature as the x-axis, and trip `cost` as the y-axis. The first 100 predicted and actual cost values are created as separate series, in order to compare the variance of predicted cost at each trip distance value. Examining the plot shows that the distance/cost relationship is nearly linear, and the predicted cost values are in most cases very close to the actual cost values for the same trip distance."
]
},
{
@@ -382,10 +386,44 @@
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import mean_absolute_error\n",
"import matplotlib.pyplot as plt\n",
"\n",
"fig = plt.figure(figsize=(14, 10))\n",
"ax1 = fig.add_subplot(111)\n",
"\n",
"distance_vals = [x[4] for x in x_test.values]\n",
"y_actual = y_test.values.flatten().tolist()\n",
"mean_absolute_error(y_actual, y_predict)"
"\n",
"ax1.scatter(distance_vals[:100], y_predict[:100], s=18, c='b', marker=\"s\", label='Predicted')\n",
"ax1.scatter(distance_vals[:100], y_actual[:100], s=18, c='r', marker=\"o\", label='Actual')\n",
"\n",
"ax1.set_xlabel('distance (mi)')\n",
"ax1.set_title('Predicted and Actual Cost/Distance')\n",
"ax1.set_ylabel('Cost ($)')\n",
"\n",
"plt.legend(loc='upper left', prop={'size': 12})\n",
"plt.rcParams.update({'font.size': 14})\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Calculate the `root mean squared error` of the results. Use the `y_test` dataframe, and convert it to a list to compare to the predicted values. The function `mean_squared_error` takes two arrays of values, and calculates the average squared error between them. Taking the square root of the result gives an error in the same units as the y variable (cost), and indicates roughly how far your predictions are from the actual value. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import mean_squared_error\n",
"from math import sqrt\n",
"\n",
"rmse = sqrt(mean_squared_error(y_actual, y_predict))\n",
"rmse"
]
},
{
@@ -444,9 +482,9 @@
}
],
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {