version 1.0.8

This commit is contained in:
Roope Astala
2019-01-14 15:13:30 -05:00
parent f724cb4d9b
commit 3ca40c0817
58 changed files with 24874 additions and 25118 deletions

View File

@@ -1,36 +1,34 @@
# Azure Machine Learning service sample notebooks
---
# Azure Machine Learning service example notebooks
This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK,
which allows you to build, train, deploy, and manage machine learning solutions using Azure. The AML SDK
gives you the choice of using local or cloud compute resources, while managing
and maintaining the complete data science workflow from the cloud.
* Read [instructions on setting up notebooks](./NBSETUP.md) to run these notebooks.
![Azure ML workflow](https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/service/media/overview-what-is-azure-ml/aml.png)
* Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
## How to use and navigate the example notebooks?
## Getting Started
You can set up your own Python environment or use Azure Notebooks with the Azure ML SDK pre-installed. Read [these instructions](./NBSETUP.md) to set up your environment and clone the example notebooks.
These examples will provide you with an effective way to get started using AML. Once you're familiar with
some of the capabilities, explore the repository for specific topics.
You should always run the [Configuration](./configuration.ipynb) notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.
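Once the Configuration notebook has run, the other notebooks reconnect to the workspace from the saved file. A minimal sketch of that pattern, using the SDK's `Workspace.from_config()`:

```python
from azureml.core import Workspace

# Reads the config.json written by the Configuration notebook and
# connects to the corresponding Azure ML workspace.
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location)
```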
- [Configuration](./configuration.ipynb) configures your notebook library to easily connect to an
Azure Machine Learning workspace, and sets up your workspace to be used by many of the other examples. You should
always run this first when setting up a notebook library on a new machine or in a new environment
- [Train in notebook](./how-to-use-azureml/training/train-within-notebook) shows how to create a model directly in a notebook while recording
metrics, and how to deploy that model to a test service
- [Train on remote](./how-to-use-azureml/training/train-on-remote-vm) takes the previous example and shows how to create the model on a cloud compute target
- [Production deploy to AKS](./how-to-use-azureml/deployment/production-deploy-to-aks) shows how to create a production-grade inferencing web service
If you want to...
* ...try out and explore Azure ML, start with image classification tutorials [part 1 training](./tutorials/img-classification-part1-training.ipynb) and [part 2 deployment](./tutorials/img-classification-part2-deploy.ipynb).
* ...learn about experimentation and tracking run history, first [train within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then try [training on remote VM](./how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) and [using logging APIs](./how-to-use-azureml/training/logging-api/logging-api.ipynb).
* ...train deep learning models at scale, first learn about [Machine Learning Compute](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and then try [distributed hyperparameter tuning](./how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) and [distributed training](./how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb).
* ...deploy a model as a real-time scoring service, first learn the basics by [training within Notebook and deploying to Azure Container Instance](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), then learn how to [register and manage models, and create Docker images](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), and [deploy models to production on Azure Kubernetes Service](./how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb).
* ...deploy models as a batch scoring service, first [train a model within Notebook](./how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb), learn how to [register and manage models](./how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb), then [create Machine Learning Compute for scoring](./how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb), and [use Machine Learning Pipelines to deploy your model](./how-to-use-azureml/machine-learning-pipelines/pipeline-mpi-batch-prediction.ipynb).
* ...monitor your deployed models, learn about using [App Insights](./how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) and [model data collection](./how-to-use-azureml/deployment/enable-data-collection-for-models-in-aks/enable-data-collection-for-models-in-aks.ipynb).
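As a quick illustration of the monitoring step, enabling App Insights on an already-deployed service is a one-line update. A minimal sketch, where the service name is a placeholder rather than one used in these notebooks:

```python
from azureml.core import Workspace
from azureml.core.webservice import Webservice

ws = Workspace.from_config()
# "my-aks-service" is a placeholder for a service you have already deployed.
service = Webservice(workspace=ws, name="my-aks-service")
service.update(enable_app_insights=True)  # start sending telemetry to App Insights
```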
## Tutorials
The [Tutorials](./tutorials) folder contains notebooks for the tutorials described in the [Azure Machine Learning documentation](https://aka.ms/aml-docs)
## How to use AML
## How to use Azure ML
The [How to use AML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
The [How to use Azure ML](./how-to-use-azureml) folder contains specific examples demonstrating the features of the Azure Machine Learning SDK
- [Training](./how-to-use-azureml/training) - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets.
- [Training with Deep Learning](./how-to-use-azureml/training-with-deep-learning) - Examples demonstrating how to build deep learning models using estimators and parameter sweeps
@@ -38,3 +36,21 @@ The [How to use AML](./how-to-use-azureml) folder contains specific examples dem
- [Machine Learning Pipelines](./how-to-use-azureml/machine-learning-pipelines) - Examples showing how to create and use reusable pipelines for training and batch scoring
- [Deployment](./how-to-use-azureml/deployment) - Examples showing how to deploy and manage machine learning models and solutions
- [Azure Databricks](./how-to-use-azureml/azure-databricks) - Examples showing how to use Azure ML with Azure Databricks
---
## Documentation
* Quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
* [Python SDK reference](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py)
---
## Projects using Azure Machine Learning
Visit the following repos to see projects contributed by Azure ML users:
- [Fine tune natural language processing models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
- [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)

View File

@@ -23,6 +23,10 @@ if errorlevel 1 goto ErrorExit
call python -m ipykernel install --user --name %conda_env_name% --display-name "Python (%conda_env_name%)"
REM azureml.widgets is now installed as part of the pip install under the conda env.
REM Removing the old user install so that the notebooks will use the latest widget.
call jupyter nbextension uninstall --user --py azureml.widgets
echo.
echo.
echo ***************************************

View File

@@ -22,11 +22,13 @@ fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl,notebooks,explain]
pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
jupyter nbextension uninstall --user --py azureml.widgets &&
echo "" &&
echo "" &&
echo "***************************************" &&

View File

@@ -22,13 +22,15 @@ fi
if source activate $CONDA_ENV_NAME 2> /dev/null
then
echo "Upgrading azureml-sdk[automl,notebooks,explain] in existing conda environment" $CONDA_ENV_NAME
pip install --upgrade azureml-sdk[automl,notebooks,explain]
pip install --upgrade azureml-sdk[automl,notebooks,explain] &&
jupyter nbextension uninstall --user --py azureml.widgets
else
conda env create -f $AUTOML_ENV_FILE -n $CONDA_ENV_NAME &&
source activate $CONDA_ENV_NAME &&
conda install lightgbm -c conda-forge -y &&
python -m ipykernel install --user --name $CONDA_ENV_NAME --display-name "Python ($CONDA_ENV_NAME)" &&
pip install numpy==1.15.3
jupyter nbextension uninstall --user --py azureml.widgets &&
pip install numpy==1.15.3 &&
echo "" &&
echo "" &&
echo "***************************************" &&

View File

@@ -62,11 +62,8 @@
"source": [
"import json\n",
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
@@ -102,7 +99,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -228,7 +226,8 @@
"description = 'AutoML Model'\n",
"tags = None\n",
"model = local_run.register_model(description = description, tags = tags)\n",
"local_run.model_id # This will be written to the script file later in the notebook."
"\n",
"print(local_run.model_id) # This will be written to the script file later in the notebook."
]
},
{

View File

@@ -61,11 +61,8 @@
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
@@ -73,8 +70,7 @@
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -100,7 +96,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -135,8 +132,6 @@
"metadata": {},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"\n",
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",

View File

@@ -60,11 +60,8 @@
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
@@ -72,8 +69,7 @@
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -99,7 +95,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -134,8 +131,6 @@
"metadata": {},
"outputs": [],
"source": [
"from sklearn import datasets\n",
"\n",
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 100 rows from training so that they can be used for test.\n",

View File

@@ -1,154 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning Configuration\n",
"\n",
"In this example you will create an Azure Machine Learning `Workspace` object and initialize your notebook directory to easily reload this object from a configuration file. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<subscription_id>\"\n",
"resource_group = \"myrg\"\n",
"workspace_name = \"myws\"\n",
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -80,7 +80,6 @@
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import time\n",
"\n",
"import pandas as pd\n",
@@ -117,7 +116,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -323,7 +323,6 @@
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"import pandas as pd\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
@@ -427,8 +426,6 @@
"source": [
"#Randomly select digits and test\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import random\n",
"import numpy as np\n",
"\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
@@ -482,7 +479,7 @@
"metadata": {},
"outputs": [],
"source": [
"digits_complete.to_pandas_dataframe().shape\n",
"print(digits_complete.to_pandas_dataframe().shape)\n",
"labels_column = 'Column64'\n",
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"

View File

@@ -80,7 +80,6 @@
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"\n",
"import pandas as pd\n",
"\n",
@@ -115,7 +114,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -274,7 +274,6 @@
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"import pandas as pd\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
@@ -378,8 +377,6 @@
"source": [
"#Randomly select digits and test\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import random\n",
"import numpy as np\n",
"\n",
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
@@ -433,7 +430,7 @@
"metadata": {},
"outputs": [],
"source": [
"digits_complete.to_pandas_dataframe().shape\n",
"print(digits_complete.to_pandas_dataframe().shape)\n",
"labels_column = 'Column64'\n",
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"

View File

@@ -53,22 +53,11 @@
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"import re\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"import json\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.run import Run\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
@@ -152,7 +141,7 @@
"for run in automl_runs:\n",
" properties = run.get_properties()\n",
" tags = run.get_tags()\n",
" amlsettings = eval(properties['RawAMLSettingsString'])\n",
" amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
" if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
" else:\n",
@@ -196,7 +185,7 @@
"properties = ml_run.get_properties()\n",
"tags = ml_run.get_tags()\n",
"status = ml_run.get_details()\n",
"amlsettings = eval(properties['RawAMLSettingsString'])\n",
"amlsettings = json.loads(properties['AMLSettingsJsonString'])\n",
"if 'iterations' in tags:\n",
" iterations = tags['iterations']\n",
"else:\n",
@@ -297,7 +286,7 @@
"description = 'AutoML Model'\n",
"tags = None\n",
"ml_run.register_model(description = description, tags = tags)\n",
"ml_run.model_id # Use this id to deploy the model as a web service in Azure."
"print(ml_run.model_id) # Use this id to deploy the model as a web service in Azure."
]
},
{

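The change above replaces `eval` on `RawAMLSettingsString` with `json.loads` on `AMLSettingsJsonString`, so run settings are parsed as data rather than executed as Python. A minimal sketch of the safer pattern, assuming `run` is an existing `Run` object:

```python
import json

properties = run.get_properties()
# AMLSettingsJsonString is a JSON document, so it can be parsed safely,
# without evaluating arbitrary text the way eval() would.
amlsettings = json.loads(properties['AMLSettingsJsonString'])
```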
View File

@@ -58,7 +58,6 @@
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import os\n",
"import logging\n",
"import warnings\n",
"# Squash warning messages for cleaner output in the notebook\n",
@@ -68,9 +67,7 @@
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score"
]
},
@@ -98,7 +95,8 @@
"output['Project Directory'] = project_folder\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -289,61 +287,6 @@
"y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define a Check Data Function\n",
"\n",
"Remove the nan values from y_test to avoid error when calculate metrics "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def _check_calc_input(y_true, y_pred, rm_na=True):\n",
" \"\"\"\n",
" Check that 'y_true' and 'y_pred' are non-empty and\n",
" have equal length.\n",
"\n",
" :param y_true: Vector of actual values\n",
" :type y_true: array-like\n",
"\n",
" :param y_pred: Vector of predicted values\n",
" :type y_pred: array-like\n",
"\n",
" :param rm_na:\n",
" If rm_na=True, remove entries where y_true=NA and y_pred=NA.\n",
" :type rm_na: boolean\n",
"\n",
" :return:\n",
" Tuple (y_true, y_pred). if rm_na=True,\n",
" the returned vectors may differ from their input values.\n",
" :rtype: Tuple with 2 entries\n",
" \"\"\"\n",
" if len(y_true) != len(y_pred):\n",
" raise ValueError(\n",
" 'the true values and prediction values do not have equal length.')\n",
" elif len(y_true) == 0:\n",
" raise ValueError(\n",
" 'y_true and y_pred are empty.')\n",
" # if there is any non-numeric element in the y_true or y_pred,\n",
" # the ValueError exception will be thrown.\n",
" y_true = np.array(y_true).astype(float)\n",
" y_pred = np.array(y_pred).astype(float)\n",
" if rm_na:\n",
" # remove entries both in y_true and y_pred where at least\n",
" # one element in y_true or y_pred is missing\n",
" y_true_rm_na = y_true[~(np.isnan(y_true) | np.isnan(y_pred))]\n",
" y_pred_rm_na = y_pred[~(np.isnan(y_true) | np.isnan(y_pred))]\n",
" return (y_true_rm_na, y_pred_rm_na)\n",
" else:\n",
" return y_true, y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -357,7 +300,22 @@
"metadata": {},
"outputs": [],
"source": [
"y_test,y_pred = _check_calc_input(y_test,y_pred)"
"if len(y_test) != len(y_pred):\n",
" raise ValueError(\n",
" 'the true values and prediction values do not have equal length.')\n",
"elif len(y_test) == 0:\n",
" raise ValueError(\n",
" 'y_true and y_pred are empty.')\n",
"\n",
"# if there is any non-numeric element in the y_true or y_pred,\n",
"# the ValueError exception will be thrown.\n",
"y_test_f = np.array(y_test).astype(float)\n",
"y_pred_f = np.array(y_pred).astype(float)\n",
"\n",
"# remove entries both in y_true and y_pred where at least\n",
"# one element in y_true or y_pred is missing\n",
"y_test = y_test_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))]\n",
"y_pred = y_pred_f[~(np.isnan(y_test_f) | np.isnan(y_pred_f))]"
]
},
{

View File

@@ -59,7 +59,6 @@
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import os\n",
"import logging\n",
"import warnings\n",
"# Squash warning messages for cleaner output in the notebook\n",
@@ -69,7 +68,6 @@
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error"
]
},
@@ -97,7 +95,8 @@
"output['Project Directory'] = project_folder\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{

View File

@@ -63,11 +63,8 @@
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
@@ -75,8 +72,7 @@
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -102,7 +98,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -135,8 +132,6 @@
"metadata": {},
"outputs": [],
"source": [
"from scipy import sparse\n",
"\n",
"digits = datasets.load_digits()\n",
"X_train = digits.data[10:,:]\n",
"y_train = digits.target[10:]\n",

View File

@@ -57,15 +57,12 @@
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"import pandas as pd\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -92,7 +89,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{

View File

@@ -58,20 +58,15 @@
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -97,7 +92,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -354,9 +350,6 @@
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"from sklearn import datasets\n",
"from sklearn.metrics import mean_squared_error, r2_score\n",
"\n",
"# Set up a multi-plot chart.\n",
@@ -375,8 +368,8 @@
"a0.set_ylabel('Residual Values', fontsize = 12)\n",
"\n",
"# Plot a histogram.\n",
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step');\n",
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10);\n",
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step')\n",
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10)\n",
"\n",
"# Plot residual values of test set.\n",
"a1.axis([0, 90, -200, 200])\n",

View File

@@ -66,21 +66,15 @@
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -106,7 +100,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{

View File

@@ -67,10 +67,8 @@
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
@@ -78,8 +76,7 @@
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -105,7 +102,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -170,7 +168,7 @@
" # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
" compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
" \n",
" # For a more detailed view of current AmlCompute status, use the 'status' property."
" # For a more detailed view of current AmlCompute status, use get_status()."
]
},
{

View File

@@ -59,21 +59,16 @@
"source": [
"import logging\n",
"import os\n",
"import random\n",
"import time\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -100,7 +95,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
@@ -169,7 +165,8 @@
"metadata": {},
"outputs": [],
"source": [
"mkdir data"
"if not os.path.isdir('data'):\n",
" os.mkdir('data') "
]
},
{
@@ -218,7 +215,6 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace, Datastore\n",
"#blob_datastore = Datastore(ws, blob_datastore_name)\n",
"ds = ws.get_default_datastore()\n",
"print(ds.datastore_type, ds.account_name, ds.container_name)"

View File

@@ -67,11 +67,9 @@
"source": [
"import logging\n",
"import os\n",
"import random\n",
"import time\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
@@ -79,8 +77,7 @@
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -106,7 +103,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{

View File

@@ -51,11 +51,8 @@
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
@@ -63,8 +60,7 @@
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -93,7 +89,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{

View File

@@ -61,20 +61,13 @@
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
"from azureml.train.automl import AutoMLConfig"
]
},
{
@@ -101,7 +94,8 @@
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{

View File

@@ -473,9 +473,9 @@
}
],
"kernelspec": {
"display_name": "Python [default]",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -403,11 +403,11 @@
"source": [
"### b. Connect Blob to Power Bi (Small Data only)\n",
"1. Download and Open PowerBi Desktop\n",
"2. Select Get Data and click on Azure Blob Storage >> Connect\n",
"2. Select \u201cGet Data\u201d and click on \u201cAzure Blob Storage\u201d >> Connect\n",
"3. Add your storage account and enter your storage key.\n",
"4. Select the container where your Data Collection is stored and click on Edit. \n",
"5. In the query editor, click under “Name” column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3\n",
"6. Click on the double arrow aside the Content column to combine the files. \n",
"5. In the query editor, click under \u201cName\u201d column and add your Storage account Model path into the filter. Note: if you want to only look into files from a specific year or month, just expand the filter path. For example, just look into March data: /modeldata/subscriptionid>/resourcegroupname>/workspacename>/webservicename>/modelname>/modelversion>/identifier>/year>/3\n",
"6. Click on the double arrow aside the \u201cContent\u201d column to combine the files. \n",
"7. Click OK and the data will preload.\n",
"8. You can now click Close and Apply and start building your custom reports on your Model Input data."
]
@@ -455,9 +455,9 @@
}
],
"kernelspec": {
"display_name": "Python [default]",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -4,7 +4,7 @@ These tutorials show how to create and deploy Open Neural Network eXchange ([ONN
## Tutorials
0. [Configure your Azure Machine Learning Workspace](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb)
0. [Configure your Azure Machine Learning Workspace](../../../configuration.ipynb)
#### Obtain models from the [ONNX Model Zoo](https://github.com/onnx/models) and deploy with ONNX Runtime Inference
1. [Handwritten Digit Classification (MNIST)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb)

View File

@@ -33,7 +33,7 @@
"To make the best use of your time, make sure you have done the following:\n",
"\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](../00.configuration.ipynb) notebook to:\n",
"* Go through the [configuration](../../../configuration.ipynb) notebook to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (config.json)"
]
@@ -71,7 +71,7 @@
"source": [
"## Convert model to ONNX\n",
"\n",
"First we download the CoreML model. We use the CoreML model listed at https://coreml.store/tinyyolo. This may take a few minutes."
"First we download the CoreML model. We use the CoreML model from [Matthijs Hollemans's tutorial](https://github.com/hollance/YOLO-CoreML-MPSNNGraph). This may take a few minutes."
]
},
{
@@ -82,8 +82,8 @@
"source": [
"import urllib.request\n",
"\n",
"onnx_model_url = \"https://s3-us-west-2.amazonaws.com/coreml-models/TinyYOLO.mlmodel\"\n",
"urllib.request.urlretrieve(onnx_model_url, filename=\"TinyYOLO.mlmodel\")\n"
"coreml_model_url = \"https://github.com/hollance/YOLO-CoreML-MPSNNGraph/raw/master/TinyYOLO-CoreML/TinyYOLO-CoreML/TinyYOLO.mlmodel\"\n",
"urllib.request.urlretrieve(coreml_model_url, filename=\"TinyYOLO.mlmodel\")\n"
]
},
{

View File

@@ -34,7 +34,7 @@
"## Prerequisites\n",
"\n",
"### 1. Install Azure ML SDK and create a new workspace\n",
"Please follow [Azure ML configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) to set up your environment.\n",
"Please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
"\n",
"### 2. Install additional packages needed for this Notebook\n",
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n",

View File

@@ -34,7 +34,7 @@
"## Prerequisites\n",
"\n",
"### 1. Install Azure ML SDK and create a new workspace\n",
"Please follow [Azure ML configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) to set up your environment.\n",
"Please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
"\n",
"### 2. Install additional packages needed for this tutorial notebook\n",
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed. \n",

View File

@@ -33,7 +33,7 @@
"To make the best use of your time, make sure you have done the following:\n",
"\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](../00.configuration.ipynb) notebook to:\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (config.json)"
]

View File

@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## 10. Register Model, Create Image and Deploy Service\n",
"## Register Model, Create Image and Deploy Service\n",
"\n",
"This example shows how to deploy a web service in step-by-step fashion:\n",
"\n",
@@ -24,9 +24,9 @@
" 5. Deploy the image as web service\n",
" \n",
"**IMPORTANT**:\n",
" * This notebook requires you to first complete \"01.SDK-101-Train-and-Deploy-to-ACI.ipynb\" Notebook\n",
" * This notebook requires you to first complete [train-within-notebook](../../training/train-within-notebook/train-within-notebook.ipynb) example\n",
" \n",
"The 101 Notebook taught you how to deploy a web service directly from model in one step. This Notebook shows a more advanced approach that gives you more control over model versions and Docker image versions. "
"The train-within-notebook example taught you how to deploy a web service directly from model in one step. This Notebook shows a more advanced approach that gives you more control over model versions and Docker image versions. "
]
},
{
@@ -34,7 +34,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
"Make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
]
},
{

Binary file not shown.


View File

@@ -310,9 +310,9 @@
}
],
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -260,9 +260,9 @@
"metadata": {},
"outputs": [],
"source": [
"# For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property\n",
"# For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
"# example: un-comment the following line.\n",
"# print(aml_compute.status.serialize())"
"# print(aml_compute.get_status().serialize())"
]
},
{
@@ -584,9 +584,9 @@
}
],
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -100,9 +100,9 @@
"metadata": {},
"outputs": [],
"source": [
"# For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property\n",
"# For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
"# example: un-comment the following line.\n",
"# print(aml_compute.status.serialize())"
"# print(aml_compute.get_status().serialize())"
]
},
{
@@ -346,9 +346,9 @@
}
],
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -346,9 +346,9 @@
}
],
"kernelspec": {
"display_name": "Python [default]",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -317,8 +317,9 @@
"- **existing_cluster_id:** Cluster ID of an existing Interactive cluster on the Databricks workspace. If you are providing this, do not provide any of the parameters below that are used to create a new cluster such as spark_version, node_type, etc.\n",
"- **spark_version:** Version of spark for the databricks run cluster. default value: 4.0.x-scala2.11\n",
"- **node_type:** Azure vm node types for the databricks run cluster. default value: Standard_D3_v2\n",
"- **num_workers:** Number of workers for the databricks run cluster\n",
"- **autoscale:** The autoscale configuration for the databricks run cluster\n",
"- **num_workers:** Specifies a static number of workers for the databricks run cluster\n",
"- **min_workers:** Specifies a min number of workers to use for auto-scaling the databricks run cluster\n",
"- **max_workers:** Specifies a max number of workers to use for auto-scaling the databricks run cluster\n",
"- **spark_env_variables:** Spark environment variables for the databricks run cluster (dictionary of {str:str}). default value: {'PYSPARK_PYTHON': '/databricks/python3/bin/python3'}\n",
"- **notebook_path:** Path to the notebook in the databricks instance. If you are providing this, do not provide python script related paramaters or JAR related parameters.\n",
"- **notebook_params:** Parameters for the databricks notebook (dictionary of {str:str}). Fetch this inside the notebook using dbutils.widgets.get(\"myparam\")\n",
@@ -342,7 +343,7 @@
"- **version:** Optional version tag to denote a change in functionality for the step\n",
"\n",
"\\* *denotes required fields* \n",
"*You must provide exactly one of num_workers or autoscale paramaters* \n",
"*You must provide exactly one of num_workers or min_workers and max_workers paramaters* \n",
"*You must provide exactly one of databricks_compute or databricks_compute_name parameters*\n",
"\n",
"## Use runconfig to specify library dependencies\n",
@@ -388,7 +389,7 @@
"metadata": {},
"source": [
"### 1. Running the demo notebook already added to the Databricks workspace\n",
"Create a notebook in the Azure Databricks workspace, and provide the path to that notebook as the value associated with the environment variable \"DATABRICKS_NOTEBOOK_PATH\". This will then set the variable notebook_path when you run the code cell below:"
"Create a notebook in the Azure Databricks workspace, and provide the path to that notebook as the value associated with the environment variable \"DATABRICKS_NOTEBOOK_PATH\". This will then set the variable\u00c2\u00a0notebook_path\u00c2\u00a0when you run the code cell below:"
]
},
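A minimal sketch of reading that environment variable into `notebook_path` before constructing the step:

```python
import os

# DATABRICKS_NOTEBOOK_PATH is set by you beforehand, as described above.
notebook_path = os.getenv("DATABRICKS_NOTEBOOK_PATH")
```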
{
@@ -425,11 +426,10 @@
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#steps = [dbNbStep]\n",
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
"#pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline)\n",
"#pipeline_run.wait_for_completion()"
"steps = [dbNbStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
@@ -445,9 +445,8 @@
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#from azureml.widgets import RunDetails\n",
"#RunDetails(pipeline_run).show()"
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
@@ -497,11 +496,10 @@
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#steps = [dbPythonInDbfsStep]\n",
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
"#pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline)\n",
"#pipeline_run.wait_for_completion()"
"steps = [dbPythonInDbfsStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
@@ -517,9 +515,8 @@
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#from azureml.widgets import RunDetails\n",
"#RunDetails(pipeline_run).show()"
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
@@ -640,11 +637,10 @@
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#steps = [dbJarInDbfsStep]\n",
"#pipeline = Pipeline(workspace=ws, steps=steps)\n",
"#pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline)\n",
"#pipeline_run.wait_for_completion()"
"steps = [dbJarInDbfsStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
@@ -660,9 +656,8 @@
"metadata": {},
"outputs": [],
"source": [
"#PUBLISHONLY\n",
"#from azureml.widgets import RunDetails\n",
"#RunDetails(pipeline_run).show()"
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
@@ -681,9 +676,9 @@
}
],
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -158,9 +158,9 @@
"metadata": {},
"outputs": [],
"source": [
"# For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property\n",
"# For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
"# example: un-comment the following line.\n",
"# print(aml_compute.status.serialize())"
"# print(aml_compute.get_status().serialize())"
]
},
{
@@ -396,9 +396,9 @@
}
],
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -76,7 +76,7 @@
"metadata": {},
"source": [
"### Set up datastores\n",
"First, lets access the datastore that has the model, labels, and images. \n",
"First, let\u00e2\u20ac\u2122s access the datastore that has the model, labels, and images. \n",
"\n",
"### Create a datastore that points to a blob container containing sample images\n",
"\n",
@@ -106,7 +106,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, lets specify the default datastore for the outputs."
"Next, let\u00e2\u20ac\u2122s specify the default datastore for the outputs."
]
},
{
@@ -193,8 +193,8 @@
" # if no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
" # For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property \n",
" print(compute_target.status.serialize())"
" # For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n",
" print(compute_target.get_status().serialize())"
]
},
{
@@ -295,7 +295,7 @@
"metadata": {},
"source": [
"## Build and run the batch scoring pipeline\n",
"You have everything you need to build the pipeline. Lets put all these together."
"You have everything you need to build the pipeline. Let\u00e2\u20ac\u2122s put all these together."
]
},
{
@@ -551,9 +551,9 @@
}
],
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -588,9 +588,9 @@
}
],
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3.6",
"language": "python",
"name": "python3"
"name": "python36"
},
"language_info": {
"codemirror_mode": {

View File

@@ -23,7 +23,7 @@
"source": [
"## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb]() notebook to:\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
@@ -124,8 +124,8 @@
"\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current AmlCompute. \n",
"print(compute_target.status.serialize())"
"# use get_status() to get a detailed status for the current AmlCompute. \n",
"print(compute_target.get_status().serialize())"
]
},
{

View File

@@ -22,7 +22,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"* Go through the [Configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`\n",
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`\n",
"* Review the [tutorial](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) on single-node PyTorch training using Azure Machine Learning"
]
},
@@ -122,8 +122,8 @@
"\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current AmlCompute. \n",
"print(compute_target.status.serialize())"
"# use get_status() to get a detailed status for the current AmlCompute. \n",
"print(compute_target.get_status().serialize())"
]
},
{

View File

@@ -23,7 +23,7 @@
"source": [
"## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)\n",
"* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK"
@@ -124,8 +124,8 @@
"\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
]
},
{

View File

@@ -23,7 +23,7 @@
"source": [
"## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning (AML)\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)\n",
"* Review the [tutorial](https://aka.ms/aml-notebook-hyperdrive) on single-node TensorFlow training using the SDK"
@@ -124,8 +124,8 @@
"\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
]
},
{

View File

@@ -26,7 +26,7 @@
"source": [
"## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) notebook to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]

View File

@@ -27,7 +27,7 @@
"source": [
"## Prerequisites\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) notebook to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
@@ -423,8 +423,8 @@
"\n",
"compute_target.wait_for_completion(show_output=True, min_node_count=1, timeout_in_minutes=20)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
]
},
{

View File

@@ -25,7 +25,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"* Go through the [Configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`"
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML `Workspace`"
]
},
{
@@ -123,8 +123,8 @@
"\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
]
},
{

View File

@@ -26,7 +26,7 @@
"\n",
"## Prerequisite:\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
"* Go through the [configuration notebook](../../../configuration.ipynb) to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
@@ -299,8 +299,8 @@
" # if no min node count is provided it uses the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
"# use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
]
},
{

View File

@@ -22,7 +22,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](../../00.configuration.ipynb) Notebook first if you haven't. Also make sure you have tqdm and matplotlib installed in the current kernel.\n",
"Make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't. Also make sure you have tqdm and matplotlib installed in the current kernel.\n",
"\n",
"```\n",
"(myenv) $ conda install -y tqdm matplotlib\n",

View File

@@ -31,7 +31,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) Notebook first if you haven't."
"Make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't."
]
},
{
@@ -119,7 +119,7 @@
"\n",
"First lets check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the supported_vms () function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.\n",
"\n",
"You can also pass a different region to check availability and then re-create your workspace in that region through the [00. Installation and Configuration](00.configuration.ipynb)"
"You can also pass a different region to check availability and then re-create your workspace in that region through the [configuration notebook](../../../configuration.ipynb)"
]
},
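As a quick sketch of that availability check (assuming `ws` is the workspace object; the alternate region string is illustrative):

```python
from azureml.core.compute import AmlCompute

# List the VM sizes AmlCompute supports in the workspace region;
# pass location='eastus2' (or another region) to check elsewhere.
for vm_size in AmlCompute.supported_vmsizes(workspace=ws):
    print(vm_size)
```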
{

View File

@@ -29,7 +29,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
"Make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't."
]
},
{

View File

@@ -30,7 +30,7 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
"Make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't."
]
},
{
@@ -190,7 +190,7 @@
"source": [
"## Create and Attach a DSVM as a compute target\n",
"\n",
"**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.\n",
"**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.\n",
"\n",
"```shell\n",
"# create a DSVM in your resource group\n",
@@ -209,9 +209,8 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import RemoteCompute\n",
"from azureml.core.compute import ComputeTarget, RemoteCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"import os\n",
"\n",
"username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')\n",
"address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')\n",
@@ -222,13 +221,13 @@
" attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)\n",
" print('found existing:', attached_dsvm_compute.name)\n",
"except ComputeTargetException:\n",
" attached_dsvm_compute = RemoteCompute.attach(workspace=ws,\n",
" name=compute_target_name,\n",
" username=username,\n",
" address=address,\n",
" attach_config = RemoteCompute.attach_configuration(address=address,\n",
" ssh_port=22,\n",
" username=username,\n",
" private_key_file='./.ssh/id_rsa')\n",
" \n",
" attached_dsvm_compute = ComputeTarget.attach(workspace=ws,\n",
" name=compute_target_name,\n",
" attach_config=attach_config)\n",
" attached_dsvm_compute.wait_for_completion(show_output=True)"
]
},
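Put together, the new attach flow reads roughly as follows (a sketch assuming `ws` and the same environment variables as the notebook; the target name is illustrative):

```python
import os

from azureml.core.compute import ComputeTarget, RemoteCompute
from azureml.core.compute_target import ComputeTargetException

username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'  # illustrative name

try:
    # Reuse the target if it is already attached to the workspace.
    attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
    print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
    # attach_configuration() plus ComputeTarget.attach() replaces the old
    # RemoteCompute.attach() call removed in this diff.
    attach_config = RemoteCompute.attach_configuration(address=address,
                                                       ssh_port=22,
                                                       username=username,
                                                       private_key_file='./.ssh/id_rsa')
    attached_dsvm_compute = ComputeTarget.attach(workspace=ws,
                                                 name=compute_target_name,
                                                 attach_config=attach_config)
    attached_dsvm_compute.wait_for_completion(show_output=True)
```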
@@ -296,7 +295,6 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Run\n",
"from azureml.core import ScriptRunConfig\n",
"\n",
"src = ScriptRunConfig(source_directory=script_folder, \n",
@@ -386,7 +384,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You can choose to SSH into the VM and install Azure ML SDK, and any other missing dependencies, in that Python environment. For demonstration purposes, we simply are going to create another script `train2.py` that doesn't have azureml dependencies, and submit it instead."
"You can choose to SSH into the VM and install Azure ML SDK, and any other missing dependencies, in that Python environment. For demonstration purposes, we simply are going to use another script `train2.py` that doesn't have azureml dependencies, and submit it instead."
]
},
{
@@ -395,11 +393,11 @@
"metadata": {},
"outputs": [],
"source": [
"%%writefile $script_folder/train2.py\n",
"# copy train2.py into the script folder\n",
"shutil.copy('./train2.py', os.path.join(script_folder, 'train2.py'))\n",
"\n",
"print('####################################')\n",
"print('Hello World (without Azure ML SDK)!')\n",
"print('####################################')"
"with open(os.path.join(script_folder, './train2.py'), 'r') as training_script:\n",
" print(training_script.read())"
]
},
{
@@ -452,10 +450,6 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"\n",
"# Load the \"cpu-dsvm.runconfig\" file (created by the above attach operation) in memory\n",
"docker_run_config = RunConfiguration(framework=\"python\")\n",
"\n",

View File

@@ -0,0 +1,6 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
print('####################################')
print('Hello World (without Azure ML SDK)!')
print('####################################')

View File

@@ -57,7 +57,7 @@
"---\n",
"\n",
"## Setup\n",
"Make sure you have completed the [Configuration](..\\..\\configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.\n",
"Make sure you have completed the [Configuration](../../../configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.\n",
"\n",
"We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.\n",
"```shell\n",
@@ -78,7 +78,7 @@
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Experiment, Run, Workspace\n",
"from azureml.core import Experiment, Workspace\n",
"\n",
"# Check core SDK version number\n",
"print(\"This notebook was created using version 1.0.2 of the Azure ML SDK\")\n",
@@ -568,7 +568,6 @@
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"\n",
"# use the first row from the test set again\n",
"test_samples = json.dumps({\"data\": X_test[0:1, :].tolist()})\n",
@@ -598,7 +597,6 @@
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib\n",
"import matplotlib.pyplot as plt\n",
"\n",
"f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})\n",
@@ -607,13 +605,13 @@
"f.set_figheight(6)\n",
"f.set_figwidth(14)\n",
"\n",
"a0.plot(residual, 'bo', alpha=0.4);\n",
"a0.plot(residual, 'bo', alpha=0.4)\n",
"a0.plot([0,90], [0,0], 'r', lw=2)\n",
"a0.set_ylabel('residue values', fontsize=14)\n",
"a0.set_xlabel('test data set', fontsize=14)\n",
"\n",
"a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step');\n",
"a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10);\n",
"a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step')\n",
"a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10)\n",
"a1.set_yticklabels([])\n",
"\n",
"plt.show()"
@@ -686,7 +684,7 @@
}
],
"kernelspec": {
"display_name": "Python [Python 3.6]",
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},

View File

@@ -176,8 +176,8 @@
" # if no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
" # For a more detailed view of current AmlCompute status, use the 'status' property \n",
" print(compute_target.status.serialize())"
" # For a more detailed view of current AmlCompute status, use get_status()\n",
" print(compute_target.get_status().serialize())"
]
},
{

View File

@@ -581,7 +581,7 @@
"> * Deploy the model to ACI\n",
"> * Test the deployed model\n",
" \n",
"You can also try out the [regression tutorial](regression-part1-data-prep.ipynb)."
"You can also try out the [Automatic algorithm selection tutorial](03.auto-train-models.ipynb) to see how Azure Machine Learning can auto-select and tune the best algorithm for your model and build that model for you."
]
}
],

View File

@@ -26,7 +26,7 @@
"> * Explore the results\n",
"> * Register the best model\n",
"\n",
"If you dont have an Azure subscription, create a [free account](https://aka.ms/AMLfree) before you begin. \n",
"If you don\u00e2\u20ac\u2122t have an Azure subscription, create a [free account](https://aka.ms/AMLfree) before you begin. \n",
"\n",
"> Code in this article was tested with Azure Machine Learning SDK version 1.0.0\n",
"\n",
@@ -468,7 +468,7 @@
"> * Explored and reviewed training results\n",
"> * Registered the best model\n",
"\n",
"You can also try out the [image classification tutorial](img-classification-part1-training.ipynb)."
"[Deploy your model](02.deploy-models.ipynb) with Azure Machine Learning."
]
}
],