Merge pull request #14 from rastala/master

Updating notebooks
This commit is contained in:
Roope Astala
2018-09-21 16:05:25 -04:00
committed by GitHub
23 changed files with 2724 additions and 247 deletions

View File

@@ -448,8 +448,8 @@
"outputs": [],
"source": [
"models = ws.models(name='best_model')\n",
"for name, m in models.items():\n",
" print(name, m.version)"
"for m in models:\n",
" print(m.name, m.version)"
]
},
{
@@ -778,7 +778,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python [default]",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -792,7 +792,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.4"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,325 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 03. Train on Azure Container Instance (EXPERIMENTAL)\n",
"\n",
"* Create Workspace\n",
"* Create Project\n",
"* Create `train.py` in the project folder.\n",
"* Configure an ACI (Azure Container Instance) run\n",
"* Execute in ACI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create An Experiment\n",
"\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"experiment_name = 'train-on-aci'\n",
"experiment = Experiment(workspace = ws, name = experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a folder to store the training script."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"script_folder = './samples/train-on-aci'\n",
"os.makedirs(script_folder, exist_ok = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remote execution on ACI\n",
"\n",
"Use `%%writefile` magic to write training code to `train.py` file under the project folder."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $script_folder/train.py\n",
"\n",
"import os\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"from azureml.core.run import Run\n",
"from sklearn.externals import joblib\n",
"\n",
"import numpy as np\n",
"\n",
"os.makedirs('./outputs', exist_ok=True)\n",
"\n",
"X, y = load_diabetes(return_X_y = True)\n",
"\n",
"run = Run.get_submitted_run()\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)\n",
"data = {\"train\": {\"X\": X_train, \"y\": y_train},\n",
" \"test\": {\"X\": X_test, \"y\": y_test}}\n",
"\n",
"# list of numbers from 0.0 to 1.0 with a 0.05 interval\n",
"alphas = np.arange(0.0, 1.0, 0.05)\n",
"\n",
"for alpha in alphas:\n",
" # Use Ridge algorithm to create a regression model\n",
" reg = Ridge(alpha = alpha)\n",
" reg.fit(data[\"train\"][\"X\"], data[\"train\"][\"y\"])\n",
"\n",
" preds = reg.predict(data[\"test\"][\"X\"])\n",
" mse = mean_squared_error(preds, data[\"test\"][\"y\"])\n",
" run.log('alpha', alpha)\n",
" run.log('mse', mse)\n",
" \n",
" model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)\n",
" with open(model_file_name, \"wb\") as file:\n",
" joblib.dump(value = reg, filename = 'outputs/' + model_file_name)\n",
"\n",
" print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure for using ACI\n",
"Linux-based ACI is available in `westus`, `eastus`, `westeurope`, `northeurope`, `westus2` and `southeastasia` regions. See details [here](https://docs.microsoft.com/en-us/azure/container-instances/container-instances-quotas#region-availability)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"configure run"
]
},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# create a new runconfig object\n",
"run_config = RunConfiguration()\n",
"\n",
"# signal that you want to use ACI to execute script.\n",
"run_config.target = \"containerinstance\"\n",
"\n",
"# ACI container group is only supported in certain regions, which can be different than the region the Workspace is in.\n",
"run_config.container_instance.region = 'eastus'\n",
"\n",
"# set the ACI CPU and Memory \n",
"run_config.container_instance.cpu_cores = 1\n",
"run_config.container_instance.memory_gb = 2\n",
"\n",
"# enable Docker \n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# set Docker base image to the default CPU-based image\n",
"run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"#run_config.environment.docker.base_image = 'microsoft/mmlspark:plus-0.9.9'\n",
"\n",
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# auto-prepare the Docker image when used for execution (if it is not already prepared)\n",
"run_config.auto_prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Submit the Experiment\n",
"Finally, run the training job on the ACI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"aci"
]
},
"outputs": [],
"source": [
"%%time \n",
"from azureml.core.script_run_config import ScriptRunConfig\n",
"\n",
"script_run_config = ScriptRunConfig(source_directory = script_folder,\n",
" script= 'train.py',\n",
" run_config = run_config)\n",
"\n",
"run = experiment.submit(script_run_config)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"remote run",
"aci"
]
},
"outputs": [],
"source": [
"%%time\n",
"# Shows output of the run on stdout.\n",
"run.wait_for_completion(show_output = True)"
]
},
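{
"cell_type": "markdown",
"metadata": {},
"source": [
"Files the training script writes under `./outputs` are uploaded to run history. As a minimal sketch, you can list and download them with the run's `get_file_names` and `download_file` methods; the specific pickle name below is just one of the files the training loop above produces."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# list the files uploaded from the run's ./outputs folder\n",
"print(run.get_file_names())\n",
"\n",
"# download one of the saved ridge models locally\n",
"run.download_file(name = 'outputs/ridge_0.40.pkl', output_file_path = './ridge_0.40.pkl')"
]
},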
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"query history"
]
},
"outputs": [],
"source": [
"# Show run details\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"get metrics"
]
},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"run.get_metrics()\n",
"metrics = run.get_metrics()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(\n",
" min(metrics['mse']), \n",
" metrics['alpha'][np.argmin(metrics['mse'])]\n",
"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,321 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 04. Train in a remote VM (MLC managed DSVM)\n",
"* Create Workspace\n",
"* Create Project\n",
"* Create `train.py` file\n",
"* Create DSVM as Machine Learning Compute (MLC) resource\n",
"* Configure & execute a run in a conda environment in the default miniconda Docker container on DSVM"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'train-on-remote-vm'\n",
"\n",
"from azureml.core import Experiment\n",
"\n",
"exp = Experiment(workspace = ws, name = experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View `train.py`\n",
"\n",
"For convenience, we created a training script for you. It is printed below as a text, but you can also run `%pfile ./train.py` in a cell to show the file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('./train.py', 'r') as training_script:\n",
" print(training_script.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Linux DSVM as a compute target\n",
"\n",
"**Note**: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
" \n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you switch to a different port (such as 5022), you can append the port number to the address like the example below. [Read more](../../documentation/sdk/ssh-issue.md) on this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"compute_target_name = 'mydsvm'\n",
"\n",
"try:\n",
" dsvm_compute = DsvmCompute(workspace = ws, name = compute_target_name)\n",
" print('found existing:', dsvm_compute.name)\n",
"except ComputeTargetException:\n",
" print('creating new.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = compute_target_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach an existing Linux DSVM as a compute target\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"'''\n",
" from azureml.core.compute import RemoteCompute \n",
" # if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase \n",
" dsvm_compute = RemoteCompute.attach(ws,name=\"attach-from-sdk6\",username=<username>,address=<ipaddress>,ssh_port=22,password=<password>)\n",
"'''"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure & Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure a Docker run with new conda environment on the VM\n",
"You can execute in a Docker container in the VM. If you choose this route, you don't need to install anything on the VM yourself. Azure ML execution service will take care of it for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"\n",
"# Load the \"cpu-dsvm.runconfig\" file (created by the above attach operation) in memory\n",
"run_config = RunConfiguration(framework = \"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"run_config.target = compute_target_name\n",
"\n",
"# Use Docker in the remote VM\n",
"run_config.environment.docker.enabled = True\n",
"\n",
"# Use CPU base image from DockerHub\n",
"run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"print('Base Docker image is:', run_config.environment.docker.base_image)\n",
"\n",
"# Ask system to provision a new one based on the conda_dependencies.yml file\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# Prepare the Docker and conda environment automatically when executingfor the first time.\n",
"run_config.prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the Experiment\n",
"Submit script to run in the Docker image in the remote VM. If you run this for the first time, the system will download the base image, layer in packages specified in the `conda_dependencies.yml` file on top of the base image, create a container and then execute the script in the container."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Run\n",
"from azureml.core import ScriptRunConfig\n",
"\n",
"src = ScriptRunConfig(source_directory = '.', script = 'train.py', run_config = run_config)\n",
"run = exp.submit(src)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View run history details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Find the best run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"run.get_metrics()\n",
"metrics = run.get_metrics()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"print('When alpha is {1:0.2f}, we have min MSE {0:0.2f}.'.format(\n",
" min(metrics['mse']), \n",
" metrics['alpha'][np.argmin(metrics['mse'])]\n",
"))"
]
},
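{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before deleting the compute resource, you may want to keep the best model. A minimal sketch, assuming the training script saved one pickle per alpha under `outputs/`; the model name used here is illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_alpha = metrics['alpha'][np.argmin(metrics['mse'])]\n",
"\n",
"# register the pickle saved for the best alpha into the workspace\n",
"model = run.register_model(model_name = 'diabetes-ridge',\n",
" model_path = 'outputs/ridge_{0:.2f}.pkl'.format(best_alpha))\n",
"print(model.name, model.version)"
]
},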
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up compute resource"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dsvm_compute.delete()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -66,7 +66,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment"
"## Create Experiment\n",
"\n",
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
]
},
{
@@ -76,18 +78,7 @@
"outputs": [],
"source": [
"experiment_name = 'train-on-remote-vm'\n",
"script_folder = './samples/train-on-remote-vm'\n",
"\n",
"import os\n",
"os.makedirs(script_folder, exist_ok = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"exp = Experiment(workspace = ws, name = experiment_name)"
@@ -97,9 +88,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create `train.py`\n",
"## View `train.py`\n",
"\n",
"Use `%%writefile` magic to write training code to `train.py` file under your project folder."
"For convenience, we created a training script for you. It is printed below as a text, but you can also run `%pfile ./train.py` in a cell to show the file."
]
},
{
@@ -108,46 +99,8 @@
"metadata": {},
"outputs": [],
"source": [
"%%writefile $script_folder/train.py\n",
"\n",
"import os\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"from azureml.core.run import Run\n",
"from sklearn.externals import joblib\n",
"\n",
"import numpy as np\n",
"\n",
"os.makedirs('./outputs', exist_ok=True)\n",
"\n",
"X, y = load_diabetes(return_X_y = True)\n",
"\n",
"run = Run.get_submitted_run()\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)\n",
"data = {\"train\": {\"X\": X_train, \"y\": y_train},\n",
" \"test\": {\"X\": X_test, \"y\": y_test}}\n",
"\n",
"# list of numbers from 0.0 to 1.0 with a 0.05 interval\n",
"alphas = np.arange(0.0, 1.0, 0.05)\n",
"\n",
"for alpha in alphas:\n",
" # Use Ridge algorithm to create a regression model\n",
" reg = Ridge(alpha = alpha)\n",
" reg.fit(data[\"train\"][\"X\"], data[\"train\"][\"y\"])\n",
"\n",
" preds = reg.predict(data[\"test\"][\"X\"])\n",
" mse = mean_squared_error(preds, data[\"test\"][\"y\"])\n",
" run.log('alpha', alpha)\n",
" run.log('mse', mse)\n",
" \n",
" model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)\n",
" with open(model_file_name, \"wb\") as file:\n",
" joblib.dump(value = reg, filename = 'outputs/' + model_file_name)\n",
"\n",
" print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))"
"with open('./train.py', 'r') as training_script:\n",
" print(training_script.read())"
]
},
{
@@ -158,7 +111,7 @@
"\n",
"**Note**: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
" \n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you switch to a different port (such as 5022), you can append the port number to the address like the example below. [Read more](../../documentation/sdk/ssh-issue.md) on this."
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you switch to a different port (such as 5022), you can append the port number to the address like the example below."
]
},
{
@@ -267,9 +220,8 @@
"from azureml.core import Run\n",
"from azureml.core import ScriptRunConfig\n",
"\n",
"src = ScriptRunConfig(source_directory = script_folder, script = 'train.py', run_config = run_config)\n",
"run = exp.submit(src)\n",
"run.wait_for_completion(show_output = True)"
"src = ScriptRunConfig(source_directory = '.', script = 'train.py', run_config = run_config)\n",
"run = exp.submit(src)"
]
},
{
@@ -288,6 +240,15 @@
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -334,13 +295,6 @@
"source": [
"dsvm_compute.delete()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -1,10 +1,12 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
import os
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core import Run
from azureml.core.run import Run
from sklearn.externals import joblib
import numpy as np
@@ -15,7 +17,8 @@ X, y = load_diabetes(return_X_y=True)
run = Run.get_submitted_run()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
data = {"train": {"X": X_train, "y": y_train},
"test": {"X": X_test, "y": y_test}}

View File

@@ -0,0 +1,257 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 05. Train in Spark\n",
"* Create Workspace\n",
"* Create Experiment\n",
"* Copy relevant files to the script folder\n",
"* Configure and Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'train-on-remote-vm'\n",
"\n",
"from azureml.core import Experiment\n",
"\n",
"exp = Experiment(workspace = ws, name = experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View `train-spark.py`\n",
"\n",
"For convenience, we created a training script for you. It is printed below as a text, but you can also run `%pfile ./train-spark.py` in a cell to show the file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('train-spark.py', 'r') as training_script:\n",
" print(training_script.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure & Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Attach an HDI cluster\n",
"To use HDI commpute target:\n",
" 1. Create an Spark for HDI cluster in Azure. Here is some [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor, NOT CentOS.\n",
" 2. Enter the IP address, username and password below"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import HDInsightCompute\n",
"\n",
"try:\n",
" # if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase\n",
" hdi_compute_new = HDInsightCompute.attach(ws, \n",
" name=\"hdi-attach\", \n",
" address=\"hdi-ignite-demo-ssh.azurehdinsight.net\", \n",
" ssh_port=22, \n",
" username='<username>', \n",
" password='<password>')\n",
"\n",
"except UserErrorException as e:\n",
" print(\"Caught = {}\".format(e.message))\n",
" print(\"Compute config already attached.\")\n",
" \n",
" \n",
"hdi_compute_new.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure HDI run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"\n",
"# Load the \"cpu-dsvm.runconfig\" file (created by the above attach operation) in memory\n",
"run_config = RunConfiguration(framework = \"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"run_config.target = hdi_compute.name\n",
"\n",
"# Use Docker in the remote VM\n",
"# run_config.environment.docker.enabled = True\n",
"\n",
"# Use CPU base image from DockerHub\n",
"# run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"# print('Base Docker image is:', run_config.environment.docker.base_image)\n",
"\n",
"# Ask system to provision a new one based on the conda_dependencies.yml file\n",
"run_config.environment.python.user_managed_dependencies = False\n",
"\n",
"# Prepare the Docker and conda environment automatically when executingfor the first time.\n",
"# run_config.prepare_environment = True\n",
"\n",
"# specify CondaDependencies obj\n",
"# run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
"# load the runconfig object from the \"myhdi.runconfig\" file generated by the attach operaton above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the script to HDI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"script_run_config = ScriptRunConfig(source_directory = '.',\n",
" script= 'train-spark.py',\n",
" run_config = run_config)\n",
"run = experiment.submit(script_run_config)"
]
},
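{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can monitor the submitted run from the notebook with the run widget. A minimal sketch; it assumes the `azureml.train.widgets` package (referenced elsewhere in this PR) is installed in the kernel."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.widgets import RunDetails\n",
"\n",
"# render an auto-refreshing view of the run's status, logs and metrics\n",
"RunDetails(run).show()"
]
},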
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get the URL of the run history web page\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"metrics = run.get_metrics()\n",
"print(metrics)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -74,11 +74,7 @@
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'train-on-remote-vm'\n",
"script_folder = './samples/train-on-remote-vm'\n",
"\n",
"import os\n",
"os.makedirs(script_folder, exist_ok = True)\n",
"experiment_name = 'train-on-spark'\n",
"\n",
"from azureml.core import Experiment\n",
"\n",
@@ -89,10 +85,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Copy files\n",
"## View `train-spark.py`\n",
"\n",
"\n",
"Copy `train-spark.py` and `iris.csv` into the project folde"
"For convenience, we created a training script for you. It is printed below as a text, but you can also run `%pfile ./train-spark.py` in a cell to show the file."
]
},
{
@@ -101,31 +96,8 @@
"metadata": {},
"outputs": [],
"source": [
"from shutil import copyfile\n",
"\n",
"# copy iris dataset in to project folder\n",
"copyfile('iris.csv', os.path.join(script_folder, 'iris.csv'))\n",
"\n",
"# copy train-spark.py file into project folder\n",
"# train-spark.py trains a simple LogisticRegression model using Spark.ML algorithm\n",
"copyfile('train-spark.py', os.path.join(script_folder, 'train-spark.py'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Review the train-spark.py file in the project folder."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open(os.path.join(project_folder, 'train-spark.py'), 'r') as fin:\n",
" print(fin.read())"
"with open('train-spark.py', 'r') as training_script:\n",
" print(training_script.read())"
]
},
{
@@ -224,12 +196,10 @@
"metadata": {},
"outputs": [],
"source": [
"script_run_config = ScriptRunConfig(source_directory = project.project_directory,\n",
"script_run_config = ScriptRunConfig(source_directory = '.',\n",
" script= 'train-spark.py',\n",
" run_config = run_config)\n",
"run = experiment.submit(script_run_config)\n",
"\n",
"run.wait_for_completion(show_output = True)"
"run = experiment.submit(script_run_config)"
]
},
{
@@ -239,7 +209,16 @@
"outputs": [],
"source": [
"# get the URL of the run history web page\n",
"print(helpers.get_run_history_url(run))"
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output = True)"
]
},
{
@@ -256,7 +235,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python [default]",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -270,7 +249,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.5"
}
},
"nbformat": 4,

View File

@@ -1,3 +1,5 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
import numpy as np
import pyspark

View File

@@ -218,7 +218,7 @@
"source": [
"'''\n",
"# Use the default configuration (can also provide parameters to customize)\n",
"resource_id = '/subscriptions/<subscriptin id>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aks-clsuter-name>'\n",
"resource_id = '/subscriptions/92c76a2f-0e1c-4216-b65e-abf7a3f34c1e/resourcegroups/raymondsdk0604/providers/Microsoft.ContainerService/managedClusters/my-aks-0605d37425356b7d01'\n",
"\n",
"create_name='my-existing-aks' \n",
"# Create the cluster\n",

View File

@@ -204,7 +204,7 @@
"description = 'AutoML Model'\n",
"tags = None\n",
"model = local_run.register_model(description=description, tags=tags, iteration=8)\n",
"local_run.model_id # Use this id to deploy the model as a web service in Azure"
"local_run.model_id # This will be written to the script file later in the notebook."
]
},
{
@@ -230,7 +230,7 @@
"\n",
"def init():\n",
" global model\n",
" model_path = Model.get_model_path(model_name = 'AutoMLbcfe9c23e8') # this name is model.id of model that we want to deploy\n",
" model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy\n",
" # deserialize the model file back into a sklearn model\n",
" model = joblib.load(model_path)\n",
"\n",
@@ -290,13 +290,6 @@
" print('{}\\t{}'.format(p, dependencies[p]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then copy the version "
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -311,12 +304,34 @@
" - pip:\n",
" - numpy==1.14.2\n",
" - scikit-learn==0.19.2\n",
" - --index-url https://azuremlsdktestpypi.azureedge.net/sdk-release/Preview/E7501C02541B433786111FE8E140CAA1\n",
" - --extra-index-url https://pypi.python.org/simple\n",
" - azureml-requirements\n",
" - azureml-train-automl==0.1.50\n",
" - azureml-sdk==0.1.50\n",
" - azureml-core==0.1.50"
" - azureml-sdk[notebooks,automl]==<<azureml-version>> "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Substitute the actual version number in the environment file.\n",
"\n",
"conda_env_file_name = 'myenv.yml'\n",
"\n",
"with open(conda_env_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(conda_env_file_name, 'w') as cefw:\n",
" cefw.write(content.replace('<<azureml-version>>', dependencies['azureml-sdk']))\n",
"\n",
"# Substitute the actual model id in the script file.\n",
"\n",
"script_file_name = 'score.py'\n",
"\n",
"with open(script_file_name, 'r') as cefr:\n",
" content = cefr.read()\n",
"\n",
"with open(script_file_name, 'w') as cefw:\n",
" cefw.write(content.replace('<<modelid>>', local_run.model_id))"
]
},
{
@@ -335,8 +350,8 @@
"from azureml.core.image import Image, ContainerImage\n",
"\n",
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
" execution_script=\"score.py\",\n",
" conda_file=\"myenv.yml\",\n",
" execution_script = script_file_name,\n",
" conda_file = conda_env_file_name,\n",
" tags = {'area': \"digits\", 'type': \"automl_classification\"},\n",
" description = \"Image for automl classification sample\")\n",
"\n",

View File

@@ -46,7 +46,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install --upgrade --extra-index-url https://dataprepdownloads.azureedge.net/pypi/autoML-BD0E9CABED27C837/0.1.1809.11043 azureml-dataprep --no-cache-dir --force-reinstall\n",
"!pip install azureml-dataprep\n",
"!pip install tornado==4.5.1"
]
},
@@ -56,7 +56,7 @@
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases"
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
@@ -66,7 +66,7 @@
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
@@ -112,7 +112,7 @@
"# project folder\n",
"project_folder = './sample_projects/automl-dataprep-classification'\n",
" \n",
"experiment=Experiment(ws, experiment_name)\n",
"experiment = Experiment(ws, experiment_name)\n",
" \n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
@@ -144,10 +144,10 @@
"simple_example_data_root = 'https://dprepdata.blob.core.windows.net/automl-notebook-data/'\n",
"X = dprep.smart_read_file(simple_example_data_root + 'X.csv').skip(1) # remove header\n",
"\n",
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter) \n",
"# and convert column types manually. \n",
"# You can also use `read_csv` and `to_*` transformations to read (with overridable delimiter).\n",
"# and convert column types manually.\n",
"# Here we read a comma delimited file and convert all columns to integers.\n",
"y = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex=True))"
"y = dprep.read_csv(simple_example_data_root + 'y.csv').to_long(dprep.ColumnSelector(term='.*', use_regex = True))"
]
},
{
@@ -218,7 +218,7 @@
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" X = X,\n",
" y = y, \n",
" y = y,\n",
" **automl_settings)"
]
},
@@ -235,7 +235,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remote Run"
"## Remote Run\n",
"*Note: This feature might not work properly in your workspace region before the October update. You may jump to the \"Exploring the results\" section below to explore other features AutoML and DataPrep has to offer.*"
]
},
{
@@ -278,12 +279,6 @@
"outputs": [],
"source": [
"cd = CondaDependencies()\n",
"cd.set_pip_index_url(index_url=\"--index-url https://azuremlsdktestpypi.azureedge.net/sdk-release/master/588E708E0DF342C4A80BD954289657CF\")\n",
"cd.set_pip_index_url(index_url=\"--extra-index-url https://dataprepdownloads.azureedge.net/pypi/autoML-BD0E9CABED27C837/0.1.1809.11043 --extra-index-url https://pypi.python.org/simple\")\n",
"cd.remove_pip_package(pip_package=\"azureml-defaults\")\n",
"cd.add_pip_package(pip_package='azureml-core')\n",
"cd.add_pip_package(pip_package='azureml-telemetry')\n",
"cd.add_pip_package(pip_package='azureml-train-automl')\n",
"cd.add_pip_package(pip_package='azureml-dataprep')\n",
"cd.add_pip_package(pip_package='tornado==4.5.1')"
]
@@ -322,13 +317,15 @@
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path=project_folder,\n",
" run_configuration = run_config,\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)\n",
"remote_run = experiment.submit(automl_config, show_output=True)"
" debug_log = 'automl_errors.log',\n",
" path = project_folder,\n",
" run_configuration = run_config,\n",
" X = X,\n",
" y = y,\n",
" **automl_settings)\n",
"# Please uncomment the line below to try out remote run with dataprep. \n",
"# This feature might not work properly in your workspace region before the October update.\n",
"# remote_run = experiment.submit(automl_config, show_output = True)"
]
},
{
@@ -363,8 +360,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve All Child Runs\n",
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
"#### Retrieve all child runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
@@ -377,7 +374,7 @@
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
" \n",
"import pandas as pd\n",
@@ -541,8 +538,8 @@
"source": [
"digits_complete.to_pandas_dataframe().shape\n",
"labels_column = 'Column64'\n",
"dflow_X = digits_complete.drop_columns(columns=[labels_column])\n",
"dflow_y = digits_complete.keep_columns(columns=[labels_column])"
"dflow_X = digits_complete.drop_columns(columns = [labels_column])\n",
"dflow_y = digits_complete.keep_columns(columns = [labels_column])"
]
}
],
@@ -562,7 +559,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.5"
}
},
"nbformat": 4,

View File

@@ -21,10 +21,15 @@
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"2. Instantiating AutoMLConfig which enables an extra ensembling iteration\n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Testing the fitted model\n"
"5. Testing the fitted model\n",
"\n",
"\n",
"** <b>Disclaimers / Limitations</b> **\n",
" - currently only local compute is supported for the ensembling iteration; support for remote compute will be coming soon\n",
" - currently only Train/Validation split is supported; support for cross-validation will be coming soon"
]
},
{
@@ -206,18 +211,6 @@
"local_run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = local_run.continue_experiment(X = X_digits, \n",
" y = y_digits, \n",
" show_output = True,\n",
" iterations = 5)"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -21,10 +21,15 @@
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment using an existing Workspace\n",
"2. Instantiating AutoMLConfig\n",
"2. Instantiating AutoMLConfig which enables an extra ensembling iteration\n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Testing the fitted model"
"5. Testing the fitted model\n",
"\n",
"\n",
"** <b>Disclaimers / Limitations</b> **\n",
"- currently only local compute is supported for the ensembling iteration; support for remote compute will be coming soon\n",
"- currently only Train/Validation split is supported; support for cross-validation will be coming soon"
]
},
{

View File

@@ -6,12 +6,12 @@
5. [Running using python command](#pythoncommand)
6. [Troubleshooting](#troubleshooting)
# Automated machine learning introduction <a name="introduction"></a>
Automated machine learning (automated ML) builds high quality machine learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, automated ML will give you a high quality machine learning model that you can use for predictions.
# Auto ML Introduction <a name="introduction"></a>
AutoML builds high quality Machine Learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, AutoML will give you a high quality machine learning model that you can use for predictions.
If you are new to Data Science, automated ML will help you get jumpstarted by simplifying machine learning model building. It abstracts you from needing to perform model selection, hyperparameter selection and in one step creates a high quality trained model for you to use.
If you are new to Data Science, AutoML will help you get jumpstarted by simplifying machine learning model building. It abstracts you from needing to perform model selection, hyperparameter selection and in one step creates a high quality trained model for you to use.
If you are an experienced data scientist, automated ML will help increase your productivity by intelligently performing the model and hyperparameter selection for your training and generates high quality models much quicker than manually specifying several combinations of the parameters and running training jobs. automated ML provides visibility and access to all the training jobs and the performance characteristics of the models to help you further tune the pipeline if you desire.
If you are an experienced data scientist, AutoML will help increase your productivity by intelligently performing the model and hyperparameter selection for your training and generates high quality models much quicker than manually specifying several combinations of the parameters and running training jobs. AutoML provides visibility and access to all the training jobs and the performance characteristics of the models to help you further tune the pipeline if you desire.
# Running samples in a Local Conda environment <a name="localconda"></a>
@@ -25,7 +25,7 @@ It is best if you create a new conda environment locally to try this SDK, so it
There's no need to install mini-conda specifically.
### 2. Downloading the sample notebooks
- Download the sample notebooks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) as zip and extract the contents to a local directory. The automated ML sample notebooks are in the "automl" folder.
- Download the sample notebooks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) as zip and extract the contents to a local directory. The AutoML sample notebooks are in the "automl" folder.
### 3. Setup a new conda environment
The **automl/automl_setup** script creates a new conda environment, installs the necessary packages, configures the widget and starts a jupyter notebook.
@@ -58,7 +58,7 @@ automl_setup_linux.sh
### 5. Running Samples
- Please make sure you use the Python [conda env:azure_automl] kernel when trying the sample Notebooks.
- Follow the instructions in the individual notebooks to explore various features in automated ML
- Follow the instructions in the individual notebooks to explore various features in AutoML
# Auto ML SDK Sample Notebooks <a name="samples"></a>
- [00.configuration.ipynb](00.configuration.ipynb)
@@ -113,8 +113,8 @@ automl_setup_linux.sh
- [07.auto-ml-exploring-previous-runs.ipynb](07.auto-ml-exploring-previous-runs.ipynb)
- List all projects for the workspace
- List all automated ML Runs for a given project
- Get details for an automated ML Run. (AutoML settings, run widget & all metrics)
- List all AutoML Runs for a given project
- Get details for an AutoML Run. (AutoML settings, run widget & all metrics)
- Download the fitted pipeline for any iteration
- [08.auto-ml-remote-execution-with-text-file-on-DSVM](08.auto-ml-remote-execution-with-text-file-on-DSVM.ipynb)
@@ -151,11 +151,12 @@ automl_setup_linux.sh
# Documentation <a name="documentation"></a>
## Table of Contents
1. [Automated ML Settings ](#automlsettings)
1. [Auto ML Settings ](#automlsettings)
2. [Cross validation split options](#cvsplits)
3. [Get Data Syntax](#getdata)
4. [Data pre-processing and featurization](#preprocessing)
## Automated ML Settings <a name="automlsettings"></a>
## Auto ML Settings <a name="automlsettings"></a>
|Property|Description|Default|
|-|-|-|
|**primary_metric**|This is the metric that you want to optimize.<br><br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i><br><br> Regression supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>| Classification: accuracy <br><br> Regression: spearman_correlation
The *get_data()* function can be used to return a dictionary with these values (a minimal example follows the table):
|columns|Array of strings|data_train||*Optional* Whitelist of columns to use for features|
|cv_splits_indices|Array of integers|data_train||*Optional* List of indexes to split the data for cross validation|
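For example, a minimal `get_data.py` returning the `X`/`y` pair (the dataset here is illustrative):

```python
def get_data():
    # called by automated ML to obtain the training data
    from sklearn.datasets import load_digits
    digits = load_digits()
    return {"X": digits.data, "y": digits.target}
```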
## Data pre-processing and featurization <a name="preprocessing"></a>
If you use "preprocess=True", the following data preprocessing steps are performed automatically for you:
### 1. Dropping high cardinality or no variance features
- Features with no useful information are dropped from training and validation sets. These include features with all values missing, same value across all rows or with extremely high cardinality (e.g., hashes, IDs or GUIDs).
### 2. Missing value imputation
- For numerical features, missing values are imputed with average of values in the column.
- For categorical features, missing values are imputed with most frequent value.
### 3. Generating additional features
- For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.
- For Text features: Term frequency based on bi-grams and tri-grams, Count vectorizer.
### 4. Transformations and encodings
- Numeric features with very few unique values are transformed into categorical features.
- Depending on cardinality of categorical features label encoding or (hashing) one-hot encoding is performed.
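As a minimal sketch, enabling these steps is a single flag on `AutoMLConfig` (the other parameter values here are illustrative):

```python
from sklearn.datasets import load_digits
from azureml.train.automl import AutoMLConfig

digits = load_digits()

automl_config = AutoMLConfig(task = 'classification',
                             preprocess = True,  # turn on the preprocessing steps above
                             primary_metric = 'AUC_weighted',
                             iterations = 10,
                             X = digits.data,
                             y = digits.target)
```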
# Running using python command <a name="pythoncommand"></a>
Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file.
You can then run this file using the python command.
The main code of the file must be indented so that it is under the `if __name__ == "__main__":` condition.
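For example:

```python
if __name__ == "__main__":
    # the notebook's main code goes here, indented under the condition
    print("running the automated ML experiment")
```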
# Troubleshooting <a name="troubleshooting"></a>
## Iterations fail and the log contains "MemoryError"
This can be caused by insufficient memory on the DSVM. Automated ML loads all training data into memory. So, the available memory should be more than the training data size.
This can be caused by insufficient memory on the DSVM. AutoML loads all training data into memory. So, the available memory should be more than the training data size.
If you are using a remote DSVM, memory is needed for each concurrent iteration. The concurrent_iterations setting specifies the maximum concurrent iterations. For example, if the training data size is 8 GB and concurrent_iterations is set to 10, the minimum memory required is at least 80 GB.
To resolve this issue, allocate a DSVM with more memory or reduce the value specified for concurrent_iterations.

View File

@@ -1,30 +0,0 @@
{
"channels": {
"master": [
"sample-01",
"sample-02"
],
"candidate": [
"sample-01",
"sample-02"
],
"preview": [
"sample-01",
"sample-02"
]
},
"notebooks": {
"sample-01": {
"name": "onnx-inference-mnist.ipynb",
"widgets": [ "azureml.train.widgets" ],
"dependencies": [],
"requirements": [ "matplotlib", "numpy", "onnx"]
},
"sample-02": {
"name": "onnx-inference-emotion-recognition.ipynb",
"widgets": ["azureml.train.widgets"],
"dependencies": [],
"requirements": [ "matplotlib", "numpy", "onnx"]
}
}
}

View File

@@ -388,7 +388,7 @@
"source": [
"from azureml.contrib.brainwave.pipeline import ModelDefinition, TensorflowStage, BrainWaveStage\n",
"\n",
"model_def_path = os.path.join(save_path, 'model_def.zip')\n",
"model_def_path = os.path.join(saved_model_dir, 'model_def.zip')\n",
"\n",
"model_def = ModelDefinition()\n",
"model_def.pipeline.append(TensorflowStage(sess, in_images, image_tensors))\n",
@@ -609,7 +609,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
"version": "3.5.2"
}
},
"nbformat": 4,

View File

@@ -125,7 +125,7 @@
"from azureml.contrib.brainwave.pipeline import ModelDefinition, TensorflowStage, BrainWaveStage\n",
"\n",
"save_path = os.path.expanduser('~/models/save')\n",
"model_def_path = os.path.join(save_path, 'service_def.zip')\n",
"model_def_path = os.path.join(save_path, 'model_def.zip')\n",
"\n",
"model_def = ModelDefinition()\n",
"with tf.Session() as sess:\n",
@@ -301,7 +301,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
"version": "3.5.2"
}
},
"nbformat": 4,

View File

@@ -559,7 +559,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
"version": "3.5.2"
}
},
"nbformat": 4,

View File

@@ -251,7 +251,7 @@
"try:\n",
" # look for the existing cluster by name\n",
" compute_target = ComputeTarget(workspace=ws, name=batchai_cluster_name)\n",
" if compute_target is BatchAiCompute:\n",
" if type(compute_target) is BatchAiCompute:\n",
" print('found compute target {}, just use it.'.format(batchai_cluster_name))\n",
" else:\n",
" print('{} exists but it is not a Batch AI cluster. Please choose a different name.'.format(batchai_cluster_name))\n",

View File

@@ -98,7 +98,7 @@
"source": [
"### Create experiment\n",
"\n",
"Create an experiment to track the runs in your workspace. A workspace can have muliple experiments; an experiment must belongn to a workspace."
"Create an experiment to track the runs in your workspace. A workspace can have muliple experiments. "
]
},
{
@@ -121,9 +121,7 @@
"\n",
"Azure Azure ML Managed Compute is a managed service that enables data scientists to train machine learning models on clusters of Azure virtual machines, including VMs with GPU support. In this tutorial, you create an Azure Managed Compute cluster as your training environment. This code creates a cluster for you if it does not already exist in your workspace. \n",
"\n",
" **Creation of the cluster takes approximately 5 minutes.** If the cluster is already in the workspace this code uses it and skips the creation process.\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for eg. BatchAI cluster size) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
" **Creation of the cluster takes approximately 5 minutes.** If the cluster is already in the workspace this code uses it and skips the creation process."
]
},
{
@@ -146,7 +144,7 @@
"try:\n",
" # look for the existing cluster by name\n",
" compute_target = ComputeTarget(workspace=ws, name=batchai_cluster_name)\n",
" if compute_target is BatchAiCompute:\n",
" if type(compute_target) is BatchAiCompute:\n",
" print('found compute target {}, just use it.'.format(batchai_cluster_name))\n",
" else:\n",
" print('{} exists but it is not a Batch AI cluster. Please choose a different name.'.format(batchai_cluster_name))\n",
@@ -188,6 +186,13 @@
"Download the MNIST dataset and save the files into a `data` directory locally. Images and labels for both training and testing are downloaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
@@ -330,7 +335,7 @@
"\n",
"### Create a directory\n",
"\n",
"Create a directory to hold all script files are other assets."
"Create a directory to deliver the necessary code from your computer to the remote resource."
]
},
{
@@ -434,7 +439,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Copy the utility library that loads the dataset into the script folder to be accessed by the training script."
"The file `utils.py` is referenced from the training script to load the dataset correctly. Copy this script into the script folder so that it can be accessed along with the training script on the remote resource."
]
},
{
@@ -457,11 +462,12 @@
"\n",
"* The name of the estimator object, `est`\n",
"* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. \n",
"* The compute target. In this case you will use the Managed Compute cluster you created\n",
"* The compute target. In this case you will use the Batch AI cluster you created\n",
"* The training script name, train.py\n",
"* The `data-folder` parameter used by the training script to access the data\n",
"* Any Python packages needed for training\n",
"In this tutorial, this target is the Managed Compute cluster. All files in the script folder are uploaded into the cluster nodes for execution."
"* Parameters required from the training script \n",
"* Python packages needed for training\n",
"\n",
"In this tutorial, this target is the Batch AI cluster. All files in the script folder are uploaded into the cluster nodes for execution. The data_folder is set to use the datastore (`ds.as_mount()`)."
]
},
{
@@ -507,7 +513,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the call is asynchronous, it returns a **Preparing** or **running** state as soon as the job is started.\n",
"Since the call is asynchronous, it returns a **Preparing** or **Running** state as soon as the job is started.\n",
"\n",
"## Monitor a remote run\n",
"\n",
@@ -595,7 +601,7 @@
"\n",
"## Register model\n",
"\n",
"The last step in the training script wrote the file `outputs/sklearn_mnist_model.pkl` in a folder named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special folder in that all content in the `outputs` directory is automatically uploaded as part of the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace. \n",
"The last step in the training script wrote the file `outputs/sklearn_mnist_model.pkl` in a directory named `outputs` in the VM of the cluster where the job is executed. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. Hence, the model file is now also available in your workspace.\n",
"\n",
"You can see files associated with that run."
]

View File

@@ -15,7 +15,7 @@
"source": [
"# Tutorial #2: Deploy an image classification model in Azure Container Instance (ACI)\n",
"\n",
"This tutorial is **part two of a two-part tutorial series**. In the [previous tutorial](01.train-models.ipynb), you trained machine learning models and then registered the best one in your workspace on the cloud. \n",
"This tutorial is **part two of a two-part tutorial series**. In the [previous tutorial](01.train-models.ipynb), you trained machine learning models and then registered a model in your workspace on the cloud. \n",
"\n",
"Now, you're ready to deploy the model as a web service in [Azure Container Instances](https://docs.microsoft.com/azure/container-instances/) (ACI). A web service is an image, in this case a Docker image, that encapsulates the scoring logic and the model itself. \n",
"\n",
@@ -33,8 +33,7 @@
"## Prerequisites\n",
"\n",
"Complete the model training in the [Tutorial #1: Train an image classification model with Azure Machine Learning](01.train-models.ipynb) notebook. \n",
"\n",
"If you did NOT complete the tutorial, you can instead run this cell to create a model and download the data needed for this tutorial:"
"\n"
]
},
{
@@ -43,6 +42,8 @@
"metadata": {},
"outputs": [],
"source": [
"# If you did NOT complete the tutorial, you can instead run this cell \n",
"# This will register a model and download the data needed for this tutorial\n",
"# These prerequisites are created in the training tutorial\n",
"# Feel free to skip this cell if you completed the training tutorial \n",
"\n",
@@ -251,9 +252,9 @@
"Create the scoring script, called score.py, used by the web service call to show how to use the model.\n",
"\n",
"You must include two required functions into the scoring script:\n",
"* The `init()` function, which typically loads the model into a global object. This function is executed only once when the Docker container is started. \n",
"* The `init()` function, which typically loads the model into a global object. This function is run only once when the Docker container is started. \n",
"\n",
"* The `run(input_data)` function uses the model to predict a value based on the input data. Inputs and outputs to the run typically use JSON for serialization and de-serialization, but other formats are supported."
"* The `run(input_data)` function uses the model to predict a value based on the input data. Inputs and outputs to the run typically use JSON for serialization and de-serialization, but other formats are supported.\n"
]
},
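{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of such a scoring script follows; the registered model name `sklearn_mnist` and the JSON payload shape are assumptions based on this tutorial series."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import json\n",
"import numpy as np\n",
"from sklearn.externals import joblib\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
" global model\n",
" # load the registered model once, when the container starts\n",
" model_path = Model.get_model_path('sklearn_mnist')\n",
" model = joblib.load(model_path)\n",
"\n",
"def run(raw_data):\n",
" # deserialize the JSON request, score it, and return a JSON-serializable result\n",
" data = np.array(json.loads(raw_data)['data'])\n",
" y_hat = model.predict(data)\n",
" return y_hat.tolist()"
]
},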
{
@@ -332,7 +333,7 @@
"source": [
"### Create configuration file\n",
"\n",
"Create a deployment configuration file and specify the number of CPUs and gigabyte of RAM needed for your ACI container. While it depends on your model, the default of 1 core and 1 gigabyte of RAM is usually sufficient for many models. If you feel you need more later, you can always modify the configuration and redeploy the service."
"Create a deployment configuration file and specify the number of CPUs and gigabyte of RAM needed for your ACI container. While it depends on your model, the default of 1 core and 1 gigabyte of RAM is usually sufficient for many models. If you feel you need more later, you would have to recreate the image and redeploy the service."
]
},
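{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of that configuration with `AciWebservice.deploy_configuration`; the tags and description are illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,\n",
" memory_gb = 1,\n",
" tags = {'data': 'MNIST', 'method': 'sklearn'},\n",
" description = 'Predict MNIST digits with a sklearn model')"
]
},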
{

View File

@@ -13,13 +13,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: Automatically train a classification model with Azure Automated Machine Learning\n",
"# Tutorial: Train a classification model with automated machine learning\n",
"\n",
"In this tutorial, you'll learn how to automatically generate a machine learning model. This model can then be deployed following the workflow in the [Deploy a model](02.deploy-models.ipynb) tutorial.\n",
"In this tutorial, you'll learn how to generate a machine learning model using automated machine learning (automated ML). Azure Machine Learning can perform data preprocessing, algorithm selection and hyperparameter selection in an automated way for you. The final model can then be deployed following the workflow in the [Deploy a model](02.deploy-models.ipynb) tutorial.\n",
"\n",
"[flow diagram](./imgs/flow2.png)\n",
"\n",
"Similar to the [train models tutorial](01.train-models.ipynb), this tutorial classifies handwritten images of digits (0-9) from the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset.\n",
"Similar to the [train models tutorial](01.train-models.ipynb), this tutorial classifies handwritten images of digits (0-9) from the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. But this time you don't to specify an algorithm or tune hyperparameters. The automated ML technique iterates over many combinations of algorithms and hyperparameters until it finds the best model based on your criterion.\n",
"\n",
"You'll learn how to:\n",
"\n",
@@ -190,11 +190,10 @@
"|**primary_metric**|AUC Weighted | Metric that you want to optimize.|\n",
"|**max_time_sec**|12,000|Time limit in seconds for each iteration|\n",
"|**iterations**|20|Number of iterations. In each iteration, the model trains with the data with a specific pipeline|\n",
"|**n_cross_validations**|5|Number of cross validation splits|\n",
"|**n_cross_validations**|3|Number of cross validation splits|\n",
"|**preprocess**|True| *True/False* Enables experiment to perform preprocessing on the input. Preprocessing handles *missing data*, and performs some common *feature extraction*|\n",
"|**exit_score**|0.994|*double* value indicating the target for *primary_metric*. Once the target is surpassed the run terminates|\n",
"|**blacklist_algos**|['kNN','LinearSVM']|*Array* of *strings* indicating algorithms to ignore.\n",
"|**concurrent_iterations**|5|Max number of iterations that would be executed in parallel. This number should be less than the number of cores on the DSVM. Used in remote training.|"
"|**exit_score**|0.995|*double* value indicating the target for *primary_metric*. Once the target is surpassed the run terminates|\n",
"|**blacklist_algos**|['kNN','LinearSVM']|*Array* of *strings* indicating algorithms to ignore.\n"
]
},
{
@@ -211,6 +210,8 @@
" max_time_sec = 12000,\n",
" iterations = 20,\n",
" n_cross_validations = 3,\n",
" preprocess = True,\n",
" exit_score = 0.995,\n",
" blacklist_algos = ['kNN','LinearSVM'],\n",
" X = X_digits,\n",
" y = y_digits,\n",
@@ -279,7 +280,7 @@
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"import pandas as pd\n",
@@ -287,6 +288,15 @@
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the best model \n",
"\n",
"Use the `local_run` object to get the best model and register it into the workspace. "
]
},
{
"cell_type": "code",
"execution_count": null,