Notebook folder restructuring

This commit is contained in:
Roope Astala
2018-11-29 11:32:16 -05:00
parent 613db3158d
commit 96523ec751
121 changed files with 81594 additions and 12736 deletions

View File

@@ -1,313 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 00. Installation and configuration\n",
"This notebook configures your library of notebooks to connect to an Azure Machine Learning Workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook to use an existing workspace or create a new workspace.\n",
"\n",
"## What is an Azure ML Workspace and why do I need one?\n",
"\n",
"An AML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an AML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Access Azure Subscription\n",
"\n",
"In order to create an AML Workspace, first you need access to an Azure Subscription. You can [create your own](https://azure.microsoft.com/en-us/free/) or get your existing subscription information from the [Azure portal](https://portal.azure.com).\n",
"\n",
"### 2. If you're running on your own local environment, install Azure ML SDK and other libraries\n",
"\n",
"If you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.\n",
"\n",
"Also install following libraries to your environment. Many of the example notebooks depend on them\n",
"\n",
"```\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"Once installation is complete, check the Azure ML SDK version:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Make sure your subscription is registered to use ACI\n",
"Azure Machine Learning makes use of Azure Container Instance (ACI). You need to ensure your subscription has been registered to use ACI in order be able to deploy a dev/test web service. If you have run through the quickstart experience you have already performed this step. Otherwise you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands.\n",
"\n",
"```shell\n",
"# check to see if ACI is already registered\n",
"(myenv) $ az provider show -n Microsoft.ContainerInstance -o table\n",
"\n",
"# if ACI is not registered, run this command.\n",
"# note you need to be the subscription owner in order to execute this command successfully.\n",
"(myenv) $ az provider register -n Microsoft.ContainerInstance\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up your Azure Machine Learning workspace\n",
"\n",
"### Option 1: You have workspace already\n",
"If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.\n",
"\n",
"If you have a workspace created another way, [these instructions](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#create-workspace-configuration-file) describe how to get your subscription and workspace information.\n",
"\n",
"If this cell succeeds, you're done configuring this library! Otherwise continue to follow the instructions in the rest of the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", \"<my-subscription-id>\")\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", \"<my-resource-group>\")\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", \"<my-workspace-name>\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"try:\n",
" ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)\n",
" ws.write_config()\n",
" print('Workspace configuration succeeded. You are all set!')\n",
"except:\n",
" print('Workspace not found. Run the cells below.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Option 2: You don't have workspace yet\n",
"\n",
"\n",
"#### Requirements\n",
"\n",
"Inside your Azure subscription, you will need access to a _resource group_, which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"To create or access an Azure ML Workspace, you will need to import the AML library and the following information:\n",
"* A name for your workspace\n",
"* Your subscription id\n",
"* The resource group name\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for eg. AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Supported Azure Regions\n",
"Specify a region where your workspace will be located from the list of [Azure Machine Learning regions](https://linktoregions)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", \"<my-subscription-id>\")\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", \"my-aml-resource-group\")\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", \"my-first-workspace\")\n",
"\n",
"workspace_region = os.environ.get(\"WORKSPACE_REGION\", \"eastus2\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create the workspace\n",
"This cell will create an AML workspace for you in a subscription provided you have the correct permissions.\n",
"\n",
"This will fail when:\n",
"1. You do not have permission to create a workspace in the resource group\n",
"2. You do not have permission to create a resource group if it's non-existing.\n",
"2. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" create_resource_group = True,\n",
" exist_ok = True)\n",
"ws.get_details()\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create compute resources for your training experiments\n",
"\n",
"Many of the subsequent examples use Azure Machine Learning managed compute (AmlCompute) to train models at scale. To create a **CPU** cluster now, run the cell below. The autoscale settings mean that the cluster will scale down to 0 nodes when inactive and up to 4 nodes when busy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpucluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',\n",
" max_nodes=4)\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"cpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To create a **GPU** cluster, run the cell below. Note that your subscription must have sufficient quota for GPU VMs or the command will fail. To increase quota, see [these instructions](https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your GPU cluster\n",
"gpu_cluster_name = \"gpucluster\"\n",
"\n",
"# Check if cluster exists already\n",
"try:\n",
" gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',\n",
" max_nodes=4)\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)\n",
"\n",
"gpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the rest of the sample notebooks. A good place to start is the [01.train-model tutorial](./tutorials/01.train-model.ipynb) to learn how to train and then deploy an image classification model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,331 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 05. Train in Spark\n",
"* Create Workspace\n",
"* Create Experiment\n",
"* Copy relevant files to the script folder\n",
"* Configure and Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"experiment_name = 'train-on-spark'\n",
"\n",
"from azureml.core import Experiment\n",
"exp = Experiment(workspace=ws, name=experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## View `train-spark.py`\n",
"\n",
"For convenience, we created a training script for you. It is printed below as a text, but you can also run `%pfile ./train-spark.py` in a cell to show the file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open('train-spark.py', 'r') as training_script:\n",
" print(training_script.read())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure & Run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure an ACI run\n",
"Before you try running on an actual Spark cluster, you can use a Docker image with Spark already baked in, and run it in ACI(Azure Container Registry)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# use pyspark framework\n",
"aci_run_config = RunConfiguration(framework=\"pyspark\")\n",
"\n",
"# use ACI to run the Spark job\n",
"aci_run_config.target = 'containerinstance'\n",
"aci_run_config.container_instance.region = 'eastus2'\n",
"aci_run_config.container_instance.cpu_cores = 1\n",
"aci_run_config.container_instance.memory_gb = 2\n",
"\n",
"# specify base Docker image to use\n",
"aci_run_config.environment.docker.enabled = True\n",
"aci_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_MMLSPARK_CPU_IMAGE\n",
"\n",
"# specify CondaDependencies\n",
"cd = CondaDependencies()\n",
"cd.add_conda_package('numpy')\n",
"aci_run_config.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit script to ACI to run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import ScriptRunConfig\n",
"\n",
"script_run_config = ScriptRunConfig(source_directory = '.',\n",
" script= 'train-spark.py',\n",
" run_config = aci_run_config)\n",
"run = exp.submit(script_run_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note** you can also create a new VM, or attach an existing VM, and use Docker-based execution to run the Spark job. Please see the `04.train-in-vm` for example on how to configure and run in Docker mode in a VM."
]
},
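{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough, hypothetical sketch of that Docker-based VM option (not executed here), the run configuration mirrors the ACI cell above; only the compute target changes. The target name `my-attached-vm` is an assumption and must refer to a VM compute target you have already attached to this workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core.runconfig import RunConfiguration\n",
"\n",
"# hypothetical sketch: 'my-attached-vm' must already be attached to the workspace (see 04.train-in-vm)\n",
"vm_run_config = RunConfiguration(framework=\"pyspark\")\n",
"vm_run_config.target = 'my-attached-vm'\n",
"\n",
"# run the Spark job inside a Docker container on the VM, reusing the MMLSpark base image from the ACI run\n",
"vm_run_config.environment.docker.enabled = True\n",
"vm_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_MMLSPARK_CPU_IMAGE"
]
},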
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Attach an HDI cluster\n",
"Now we can use a real Spark cluster, HDInsight for Spark, to run this job. To use HDI commpute target:\n",
" 1. Create a Spark for HDI cluster in Azure. Here are some [quick instructions](https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-jupyter-spark-sql). Make sure you use the Ubuntu flavor, NOT CentOS.\n",
" 2. Enter the IP address, username and password below"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import HDInsightCompute\n",
"from azureml.exceptions import ComputeTargetException\n",
"\n",
"try:\n",
" # if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase\n",
" hdi_compute = HDInsightCompute.attach(workspace=ws, \n",
" name=\"myhdi\", \n",
" address=\"<myhdi-ssh>.azurehdinsight.net\", \n",
" ssh_port=22, \n",
" username='<ssh-username>', \n",
" password='<ssh-pwd>')\n",
"\n",
"except ComputeTargetException as e:\n",
" print(\"Caught = {}\".format(e.message))\n",
" \n",
" \n",
"hdi_compute.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure HDI run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"\n",
"# use pyspark framework\n",
"hdi_run_config = RunConfiguration(framework=\"pyspark\")\n",
"\n",
"# Set compute target to the HDI cluster\n",
"hdi_run_config.target = hdi_compute.name\n",
"\n",
"# specify CondaDependencies object to ask system installing numpy\n",
"cd = CondaDependencies()\n",
"cd.add_conda_package('numpy')\n",
"hdi_run_config.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the script to HDI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import ScriptRunConfig\n",
"\n",
"script_run_config = ScriptRunConfig(source_directory = '.',\n",
" script= 'train-spark.py',\n",
" run_config = hdi_run_config)\n",
"run = exp.submit(config=script_run_config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get the URL of the run history web page\n",
"run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get all metris logged in the run\n",
"metrics = run.get_metrics()\n",
"print(metrics)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "aashishb"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,150 +0,0 @@
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
5.4,3.7,1.5,0.2,Iris-setosa
4.8,3.4,1.6,0.2,Iris-setosa
4.8,3.0,1.4,0.1,Iris-setosa
4.3,3.0,1.1,0.1,Iris-setosa
5.8,4.0,1.2,0.2,Iris-setosa
5.7,4.4,1.5,0.4,Iris-setosa
5.4,3.9,1.3,0.4,Iris-setosa
5.1,3.5,1.4,0.3,Iris-setosa
5.7,3.8,1.7,0.3,Iris-setosa
5.1,3.8,1.5,0.3,Iris-setosa
5.4,3.4,1.7,0.2,Iris-setosa
5.1,3.7,1.5,0.4,Iris-setosa
4.6,3.6,1.0,0.2,Iris-setosa
5.1,3.3,1.7,0.5,Iris-setosa
4.8,3.4,1.9,0.2,Iris-setosa
5.0,3.0,1.6,0.2,Iris-setosa
5.0,3.4,1.6,0.4,Iris-setosa
5.2,3.5,1.5,0.2,Iris-setosa
5.2,3.4,1.4,0.2,Iris-setosa
4.7,3.2,1.6,0.2,Iris-setosa
4.8,3.1,1.6,0.2,Iris-setosa
5.4,3.4,1.5,0.4,Iris-setosa
5.2,4.1,1.5,0.1,Iris-setosa
5.5,4.2,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
5.0,3.2,1.2,0.2,Iris-setosa
5.5,3.5,1.3,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
4.4,3.0,1.3,0.2,Iris-setosa
5.1,3.4,1.5,0.2,Iris-setosa
5.0,3.5,1.3,0.3,Iris-setosa
4.5,2.3,1.3,0.3,Iris-setosa
4.4,3.2,1.3,0.2,Iris-setosa
5.0,3.5,1.6,0.6,Iris-setosa
5.1,3.8,1.9,0.4,Iris-setosa
4.8,3.0,1.4,0.3,Iris-setosa
5.1,3.8,1.6,0.2,Iris-setosa
4.6,3.2,1.4,0.2,Iris-setosa
5.3,3.7,1.5,0.2,Iris-setosa
5.0,3.3,1.4,0.2,Iris-setosa
7.0,3.2,4.7,1.4,Iris-versicolor
6.4,3.2,4.5,1.5,Iris-versicolor
6.9,3.1,4.9,1.5,Iris-versicolor
5.5,2.3,4.0,1.3,Iris-versicolor
6.5,2.8,4.6,1.5,Iris-versicolor
5.7,2.8,4.5,1.3,Iris-versicolor
6.3,3.3,4.7,1.6,Iris-versicolor
4.9,2.4,3.3,1.0,Iris-versicolor
6.6,2.9,4.6,1.3,Iris-versicolor
5.2,2.7,3.9,1.4,Iris-versicolor
5.0,2.0,3.5,1.0,Iris-versicolor
5.9,3.0,4.2,1.5,Iris-versicolor
6.0,2.2,4.0,1.0,Iris-versicolor
6.1,2.9,4.7,1.4,Iris-versicolor
5.6,2.9,3.6,1.3,Iris-versicolor
6.7,3.1,4.4,1.4,Iris-versicolor
5.6,3.0,4.5,1.5,Iris-versicolor
5.8,2.7,4.1,1.0,Iris-versicolor
6.2,2.2,4.5,1.5,Iris-versicolor
5.6,2.5,3.9,1.1,Iris-versicolor
5.9,3.2,4.8,1.8,Iris-versicolor
6.1,2.8,4.0,1.3,Iris-versicolor
6.3,2.5,4.9,1.5,Iris-versicolor
6.1,2.8,4.7,1.2,Iris-versicolor
6.4,2.9,4.3,1.3,Iris-versicolor
6.6,3.0,4.4,1.4,Iris-versicolor
6.8,2.8,4.8,1.4,Iris-versicolor
6.7,3.0,5.0,1.7,Iris-versicolor
6.0,2.9,4.5,1.5,Iris-versicolor
5.7,2.6,3.5,1.0,Iris-versicolor
5.5,2.4,3.8,1.1,Iris-versicolor
5.5,2.4,3.7,1.0,Iris-versicolor
5.8,2.7,3.9,1.2,Iris-versicolor
6.0,2.7,5.1,1.6,Iris-versicolor
5.4,3.0,4.5,1.5,Iris-versicolor
6.0,3.4,4.5,1.6,Iris-versicolor
6.7,3.1,4.7,1.5,Iris-versicolor
6.3,2.3,4.4,1.3,Iris-versicolor
5.6,3.0,4.1,1.3,Iris-versicolor
5.5,2.5,4.0,1.3,Iris-versicolor
5.5,2.6,4.4,1.2,Iris-versicolor
6.1,3.0,4.6,1.4,Iris-versicolor
5.8,2.6,4.0,1.2,Iris-versicolor
5.0,2.3,3.3,1.0,Iris-versicolor
5.6,2.7,4.2,1.3,Iris-versicolor
5.7,3.0,4.2,1.2,Iris-versicolor
5.7,2.9,4.2,1.3,Iris-versicolor
6.2,2.9,4.3,1.3,Iris-versicolor
5.1,2.5,3.0,1.1,Iris-versicolor
5.7,2.8,4.1,1.3,Iris-versicolor
6.3,3.3,6.0,2.5,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
7.1,3.0,5.9,2.1,Iris-virginica
6.3,2.9,5.6,1.8,Iris-virginica
6.5,3.0,5.8,2.2,Iris-virginica
7.6,3.0,6.6,2.1,Iris-virginica
4.9,2.5,4.5,1.7,Iris-virginica
7.3,2.9,6.3,1.8,Iris-virginica
6.7,2.5,5.8,1.8,Iris-virginica
7.2,3.6,6.1,2.5,Iris-virginica
6.5,3.2,5.1,2.0,Iris-virginica
6.4,2.7,5.3,1.9,Iris-virginica
6.8,3.0,5.5,2.1,Iris-virginica
5.7,2.5,5.0,2.0,Iris-virginica
5.8,2.8,5.1,2.4,Iris-virginica
6.4,3.2,5.3,2.3,Iris-virginica
6.5,3.0,5.5,1.8,Iris-virginica
7.7,3.8,6.7,2.2,Iris-virginica
7.7,2.6,6.9,2.3,Iris-virginica
6.0,2.2,5.0,1.5,Iris-virginica
6.9,3.2,5.7,2.3,Iris-virginica
5.6,2.8,4.9,2.0,Iris-virginica
7.7,2.8,6.7,2.0,Iris-virginica
6.3,2.7,4.9,1.8,Iris-virginica
6.7,3.3,5.7,2.1,Iris-virginica
7.2,3.2,6.0,1.8,Iris-virginica
6.2,2.8,4.8,1.8,Iris-virginica
6.1,3.0,4.9,1.8,Iris-virginica
6.4,2.8,5.6,2.1,Iris-virginica
7.2,3.0,5.8,1.6,Iris-virginica
7.4,2.8,6.1,1.9,Iris-virginica
7.9,3.8,6.4,2.0,Iris-virginica
6.4,2.8,5.6,2.2,Iris-virginica
6.3,2.8,5.1,1.5,Iris-virginica
6.1,2.6,5.6,1.4,Iris-virginica
7.7,3.0,6.1,2.3,Iris-virginica
6.3,3.4,5.6,2.4,Iris-virginica
6.4,3.1,5.5,1.8,Iris-virginica
6.0,3.0,4.8,1.8,Iris-virginica
6.9,3.1,5.4,2.1,Iris-virginica
6.7,3.1,5.6,2.4,Iris-virginica
6.9,3.1,5.1,2.3,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica
6.8,3.2,5.9,2.3,Iris-virginica
6.7,3.3,5.7,2.5,Iris-virginica
6.7,3.0,5.2,2.3,Iris-virginica
6.3,2.5,5.0,1.9,Iris-virginica
6.5,3.0,5.2,2.0,Iris-virginica
6.2,3.4,5.4,2.3,Iris-virginica
5.9,3.0,5.1,1.8,Iris-virginica

View File

@@ -1,94 +0,0 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
import numpy as np
import pyspark
import os
import urllib
import sys
from pyspark.sql.functions import *
from pyspark.ml.classification import *
from pyspark.ml.evaluation import *
from pyspark.ml.feature import *
from pyspark.sql.types import StructType, StructField
from pyspark.sql.types import DoubleType, IntegerType, StringType
from azureml.core.run import Run
# initialize logger
run = Run.get_context()
# start Spark session
spark = pyspark.sql.SparkSession.builder.appName('Iris').getOrCreate()
# print runtime versions
print('****************')
print('Python version: {}'.format(sys.version))
print('Spark version: {}'.format(spark.version))
print('****************')
# load iris.csv into Spark dataframe
schema = StructType([
StructField("sepal-length", DoubleType()),
StructField("sepal-width", DoubleType()),
StructField("petal-length", DoubleType()),
StructField("petal-width", DoubleType()),
StructField("class", StringType())
])
data = spark.read.csv('iris.csv', header=False, schema=schema)
print("First 10 rows of Iris dataset:")
data.show(10)
# vectorize all numerical columns into a single feature column
feature_cols = data.columns[:-1]
assembler = pyspark.ml.feature.VectorAssembler(
inputCols=feature_cols, outputCol='features')
data = assembler.transform(data)
# convert text labels into indices
data = data.select(['features', 'class'])
label_indexer = pyspark.ml.feature.StringIndexer(
inputCol='class', outputCol='label').fit(data)
data = label_indexer.transform(data)
# only select the features and label column
data = data.select(['features', 'label'])
print("Reading for machine learning")
data.show(10)
# change regularization rate and you will likely get a different accuracy.
reg = 0.01
# load regularization rate from argument if present
if len(sys.argv) > 1:
reg = float(sys.argv[1])
# log regularization rate
run.log("Regularization Rate", reg)
# use Logistic Regression to train on the training set
train, test = data.randomSplit([0.70, 0.30])
lr = pyspark.ml.classification.LogisticRegression(regParam=reg)
model = lr.fit(train)
# predict on the test set
prediction = model.transform(test)
print("Prediction")
prediction.show(10)
# evaluate the accuracy of the model using the test set
evaluator = pyspark.ml.evaluation.MulticlassClassificationEvaluator(
metricName='accuracy')
accuracy = evaluator.evaluate(prediction)
print()
print('#####################################')
print('Regularization rate is {}'.format(reg))
print("Accuracy is {}".format(accuracy))
print('#####################################')
print()
# log accuracy
run.log('Accuracy', accuracy)

View File

@@ -1,224 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 00. Configuration\n",
"\n",
"In this example you will create an Azure Machine Learning `Workspace` object and initialize your notebook directory to easily reload this object from a configuration file. Typically you will only need to run this once per notebook directory, and all other notebooks in this directory or any sub-directories will automatically use the settings you indicate here.\n",
"\n",
"\n",
"## Prerequisites:\n",
"\n",
"Before running this notebook, run the `automl_setup` script described in README.md.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register Machine Learning Services Resource Provider\n",
"\n",
"Microsoft.MachineLearningServices only needs to be registed once in the subscription.\n",
"To register it:\n",
"1. Start the Azure portal.\n",
"2. Select your `All services` and then `Subscription`.\n",
"3. Select the subscription that you want to use.\n",
"4. Click on `Resource providers`\n",
"3. Click the `Register` link next to Microsoft.MachineLearningServices"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the Azure ML Core SDK Version to Validate Your Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize an Azure ML Workspace\n",
"### What is an Azure ML Workspace and Why Do I Need One?\n",
"\n",
"An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n",
"\n",
"\n",
"### What do I Need?\n",
"\n",
"To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n",
"* A name for your workspace. You can choose one.\n",
"* Your subscription id. Use the `id` value from the `az account show` command output above.\n",
"* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n",
"* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"subscription_id = \"<subscription_id>\"\n",
"resource_group = \"myrg\"\n",
"workspace_name = \"myws\"\n",
"workspace_region = \"eastus2\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Workspace\n",
"If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n",
"\n",
"This will fail when:\n",
"1. The workspace already exists.\n",
"2. You do not have permission to create a workspace in the resource group.\n",
"3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n",
"\n",
"If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n",
"\n",
"**Note:** Creation of a new workspace can take several minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the Workspace class and check the Azure ML SDK version.\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region)\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring Your Local Environment\n",
"You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace(workspace_name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group)\n",
"\n",
"# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then load the workspace from this config file from any notebook in the current directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load workspace configuration from ./aml_config/config.json file.\n",
"my_workspace = Workspace.from_config()\n",
"my_workspace.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Folder to Host All Sample Projects\n",
"Finally, create a folder where all the sample projects will be hosted."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"sample_projects_folder = './sample_projects'\n",
"\n",
"if not os.path.isdir(sample_projects_folder):\n",
" os.mkdir(sample_projects_folder)\n",
" \n",
"print('Sample projects will be created in {}.'.format(sample_projects_folder))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the rest of the sample notebooks."
]
}
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,449 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 15b: Regression with ensembling on remote compute\n",
"\n",
"In this example we use the scikit-learn's [diabetes dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html) to showcase how you can use AutoML for a simple regression problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig`which enables an extra ensembling iteration.\n",
"3. Train the model using remote compute.\n",
"4. Explore the results.\n",
"5. Test the best fitted model.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an Experiment\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment and specify the project folder.\n",
"experiment_name = 'automl-local-regression'\n",
"project_folder = './sample_projects/automl-local-regression'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data = output, index = ['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt-in diagnostics for better experience, quality, and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Remote Linux DSVM\n",
"**Note:** If creation fails with a message about Marketplace purchase eligibilty, start creation of a DSVM through the [Azure portal](https://portal.azure.com), and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled this setting, you can exit the portal without actually creating the DSVM, and creation of the DSVM through the notebook should work."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import DsvmCompute\n",
"\n",
"dsvm_name = 'mydsvm'\n",
"try:\n",
" dsvm_compute = DsvmCompute(ws, dsvm_name)\n",
" print('Found an existing DSVM.')\n",
"except:\n",
" print('Creating a new DSVM.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Get Data File\n",
"For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
"In this example, the `get_data()` function returns data using scikit-learn's `diabetes` dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists(project_folder):\n",
" os.makedirs(project_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $project_folder/get_data.py\n",
"\n",
"# Load the diabetes dataset, a well-known built-in small dataset that comes with scikit-learn.\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"def get_data():\n",
" X, y = load_diabetes(return_X_y = True)\n",
"\n",
" columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']\n",
"\n",
" X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size = 0.2, random_state = 0)\n",
" X_valid, X_test, y_valid, y_test = train_test_split(X_temp, y_temp, test_size = 0.5, random_state = 0)\n",
" return { \"X\" : X_train, \"y\" : y_train, \"X_valid\": X_valid, \"y_valid\": y_valid }"
]
},
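{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same `get_data()` contract also covers the local-disk case mentioned above. Here is a hypothetical sketch only; the file name `mydata.csv` and the label column `target` are placeholders, not data shipped with this notebook:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"def get_data():\n",
"    # read a CSV that sits next to get_data.py in the project folder\n",
"    df = pd.read_csv('mydata.csv')\n",
"    y = df['target'].values\n",
"    X = df.drop(columns=['target']).values\n",
"    return {'X': X, 'y': y}\n",
"```"
]
},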
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure AutoML\n",
"\n",
"Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**enable_ensembling**|Flag to enable an ensembling iteration after all the other iterations complete.|\n",
"|**ensemble_iterations**|Number of iterations during which we choose a fitted pipeline to be part of the final ensemble.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'regression',\n",
" iteration_timeout_minutes = 10,\n",
" iterations = 20,\n",
" primary_metric = 'spearman_correlation',\n",
" debug_log = 'regression.log',\n",
" verbosity = logging.INFO,\n",
" compute_target = dsvm_compute,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" enable_ensembling = True,\n",
" ensemble_iterations = 5,\n",
" path = project_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Model\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore the Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show() "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(local_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
" properties = run.get_properties()\n",
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
" metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
"rundata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model Based on Any Other Metric\n",
"Show the run and the model that has the smallest `root_mean_squared_error` value."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lookup_metric = \"root_mean_squared_error\"\n",
"best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n",
"print(best_run)\n",
"print(fitted_model)"
]
},
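{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model from a Specific Iteration\n",
"As a short sketch of the *iteration* overload mentioned above, you can also retrieve the run and fitted model produced by a particular iteration. The iteration number 3 below is an arbitrary example; use any completed iteration index from this run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# retrieve the model trained in a specific iteration (3 is an arbitrary example index)\n",
"iteration = 3\n",
"third_run, third_model = local_run.get_output(iteration = iteration)\n",
"print(third_run)\n",
"print(third_model)"
]
},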
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Model (Ensemble)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Predict on training and test set, and calculate residual values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"X, y = load_diabetes(return_X_y = True)\n",
"\n",
"X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size = 0.2, random_state = 0)\n",
"X_valid, X_test, y_valid, y_test = train_test_split(X_temp, y_temp, test_size = 0.5, random_state = 0)\n",
"\n",
"\n",
"y_pred_train = fitted_model.predict(X_train)\n",
"y_residual_train = y_train - y_pred_train\n",
"\n",
"y_pred_test = fitted_model.predict(X_test)\n",
"y_residual_test = y_test - y_pred_test"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"from sklearn import datasets\n",
"from sklearn.metrics import mean_squared_error, r2_score\n",
"\n",
"# Set up a multi-plot chart.\n",
"f, (a0, a1) = plt.subplots(1, 2, gridspec_kw = {'width_ratios':[1, 1], 'wspace':0, 'hspace': 0})\n",
"f.suptitle('Regression Residual Values', fontsize = 18)\n",
"f.set_figheight(6)\n",
"f.set_figwidth(16)\n",
"\n",
"# Plot residual values of training set.\n",
"a0.axis([0, 360, -200, 200])\n",
"a0.plot(y_residual_train, 'bo', alpha = 0.5)\n",
"a0.plot([-10,360],[0,0], 'r-', lw = 3)\n",
"a0.text(16,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize = 12)\n",
"a0.text(16,140,'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)), fontsize = 12)\n",
"a0.set_xlabel('Training samples', fontsize = 12)\n",
"a0.set_ylabel('Residual Values', fontsize = 12)\n",
"\n",
"# Plot a histogram.\n",
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step');\n",
"a0.hist(y_residual_train, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10);\n",
"\n",
"# Plot residual values of test set.\n",
"a1.axis([0, 90, -200, 200])\n",
"a1.plot(y_residual_test, 'bo', alpha = 0.5)\n",
"a1.plot([-10,360],[0,0], 'r-', lw = 3)\n",
"a1.text(5,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize = 12)\n",
"a1.text(5,140,'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)), fontsize = 12)\n",
"a1.set_xlabel('Test samples', fontsize = 12)\n",
"a1.set_yticklabels([])\n",
"\n",
"# Plot a histogram.\n",
"a1.hist(y_residual_test, orientation = 'horizontal', color = 'b', bins = 10, histtype = 'step')\n",
"a1.hist(y_residual_test, orientation = 'horizontal', color = 'b', alpha = 0.2, bins = 10)\n",
"\n",
"plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "ratanase"
}
],
"kernelspec": {
"display_name": "Python [default]",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

313
configuration.ipynb Normal file
View File

@@ -0,0 +1,313 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 00. Installation and configuration\n",
"This notebook configures your library of notebooks to connect to an Azure Machine Learning Workspace. In this case, a library contains all of the notebooks in the current folder and any nested folders. You can configure this notebook to use an existing workspace or create a new workspace.\n",
"\n",
"## What is an Azure ML Workspace and why do I need one?\n",
"\n",
"An AML Workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an AML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Access Azure Subscription\n",
"\n",
"In order to create an AML Workspace, first you need access to an Azure Subscription. You can [create your own](https://azure.microsoft.com/en-us/free/) or get your existing subscription information from the [Azure portal](https://portal.azure.com).\n",
"\n",
"### 2. If you're running on your own local environment, install Azure ML SDK and other libraries\n",
"\n",
"If you are running in your own environment, follow [SDK installation instructions](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment). If you are running in Azure Notebooks or another Microsoft managed environment, the SDK is already installed.\n",
"\n",
"Also install following libraries to your environment. Many of the example notebooks depend on them\n",
"\n",
"```\n",
"(myenv) $ conda install -y matplotlib tqdm scikit-learn\n",
"```\n",
"\n",
"Once installation is complete, check the Azure ML SDK version:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"install"
]
},
"outputs": [],
"source": [
"import azureml.core\n",
"\n",
"print(\"SDK Version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Make sure your subscription is registered to use ACI\n",
"Azure Machine Learning makes use of Azure Container Instance (ACI). You need to ensure your subscription has been registered to use ACI in order be able to deploy a dev/test web service. If you have run through the quickstart experience you have already performed this step. Otherwise you will need to use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and execute the following commands.\n",
"\n",
"```shell\n",
"# check to see if ACI is already registered\n",
"(myenv) $ az provider show -n Microsoft.ContainerInstance -o table\n",
"\n",
"# if ACI is not registered, run this command.\n",
"# note you need to be the subscription owner in order to execute this command successfully.\n",
"(myenv) $ az provider register -n Microsoft.ContainerInstance\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up your Azure Machine Learning workspace\n",
"\n",
"### Option 1: You have workspace already\n",
"If you ran the Azure Machine Learning [quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-get-started) in Azure Notebooks, you already have a configured workspace! You can go to your Azure Machine Learning Getting Started library, view *config.json* file, and copy-paste the values for subscription ID, resource group and workspace name below.\n",
"\n",
"If you have a workspace created another way, [these instructions](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#create-workspace-configuration-file) describe how to get your subscription and workspace information.\n",
"\n",
"If this cell succeeds, you're done configuring this library! Otherwise continue to follow the instructions in the rest of the notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", \"<my-subscription-id>\")\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", \"<my-resource-group>\")\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", \"<my-workspace-name>\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"try:\n",
" ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)\n",
" ws.write_config()\n",
" print('Workspace configuration succeeded. You are all set!')\n",
"except:\n",
" print('Workspace not found. Run the cells below.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Option 2: You don't have workspace yet\n",
"\n",
"\n",
"#### Requirements\n",
"\n",
"Inside your Azure subscription, you will need access to a _resource group_, which organizes Azure resources and provides a default region for the resources in a group. You can see what resource groups to which you have access, or create a new one in the [Azure portal](https://portal.azure.com). If you don't have a resource group, the create workspace command will create one for you using the name you provide.\n",
"\n",
"To create or access an Azure ML Workspace, you will need to import the AML library and the following information:\n",
"* A name for your workspace\n",
"* Your subscription id\n",
"* The resource group name\n",
"\n",
"**Note**: As with other Azure services, there are limits on certain resources (for eg. AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Supported Azure Regions\n",
"Specify a region where your workspace will be located from the list of [Azure Machine Learning regions](https://linktoregions)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"subscription_id = os.environ.get(\"SUBSCRIPTION_ID\", \"<my-subscription-id>\")\n",
"resource_group = os.environ.get(\"RESOURCE_GROUP\", \"my-aml-resource-group\")\n",
"workspace_name = os.environ.get(\"WORKSPACE_NAME\", \"my-first-workspace\")\n",
"\n",
"workspace_region = os.environ.get(\"WORKSPACE_REGION\", \"eastus2\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create the workspace\n",
"This cell will create an AML workspace for you in a subscription provided you have the correct permissions.\n",
"\n",
"This will fail when:\n",
"1. You do not have permission to create a workspace in the resource group\n",
"2. You do not have permission to create a resource group if it's non-existing.\n",
"2. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription\n",
"\n",
"If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"# import the Workspace class and check the azureml SDK version\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.create(name = workspace_name,\n",
" subscription_id = subscription_id,\n",
" resource_group = resource_group, \n",
" location = workspace_region,\n",
" create_resource_group = True,\n",
" exist_ok = True)\n",
"ws.get_details()\n",
"ws.write_config()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create compute resources for your training experiments\n",
"\n",
"Many of the subsequent examples use Azure Machine Learning managed compute (AmlCompute) to train models at scale. To create a **CPU** cluster now, run the cell below. The autoscale settings mean that the cluster will scale down to 0 nodes when inactive and up to 4 nodes when busy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpucluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',\n",
" max_nodes=4)\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"cpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To create a **GPU** cluster, run the cell below. Note that your subscription must have sufficient quota for GPU VMs or the command will fail. To increase quota, see [these instructions](https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your GPU cluster\n",
"gpu_cluster_name = \"gpucluster\"\n",
"\n",
"# Check if cluster exists already\n",
"try:\n",
" gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',\n",
" max_nodes=4)\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)\n",
"\n",
"gpu_cluster.wait_for_completion(show_output=True)"
]
},
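{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "As an optional sanity check (a minimal sketch, assuming the cells above completed successfully), you can list the compute targets now attached to the workspace:"
 ]
},
{
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
  "# List the compute targets registered in the workspace and their provisioning state\n",
  "for name, target in ws.compute_targets.items():\n",
  "    print(name, target.provisioning_state)"
 ]
},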
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"Great, you are ready to move on to the rest of the sample notebooks. A good place to start is the [01.train-model tutorial](./tutorials/01.train-model.ipynb) to learn how to train and then deploy an image classification model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "roastala"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1 +0,0 @@
{"cells":[{"cell_type":"markdown","source":["Azure ML & Azure Databricks notebooks by Parashar Shah.\n\nCopyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License."],"metadata":{}},{"cell_type":"markdown","source":["Please ensure you have run this notebook before proceeding."],"metadata":{}},{"cell_type":"markdown","source":["Now we support installing AML SDK as library from GUI. When attaching a library follow this https://docs.databricks.com/user-guide/libraries.html and add the below string as your PyPi package (during private preview). You can select the option to attach the library to all clusters or just one cluster.\n\nProvide this full string to install the SDK:\n\nazureml-sdk[databricks]"],"metadata":{}},{"cell_type":"code","source":["import azureml.core\n\n# Check core SDK version number - based on build number of preview/master.\nprint(\"SDK version:\", azureml.core.VERSION)"],"metadata":{},"outputs":[],"execution_count":4},{"cell_type":"code","source":["subscription_id = \"<your-subscription-id>\"\nresource_group = \"<your-existing-resource-group>\"\nworkspace_name = \"<a-new-or-existing-workspace; it is unrelated to Databricks workspace>\"\nworkspace_region = \"<your-resource group-region>\""],"metadata":{},"outputs":[],"execution_count":5},{"cell_type":"code","source":["# import the Workspace class and check the azureml SDK version\n# exist_ok checks if workspace exists or not.\n\nfrom azureml.core import Workspace\n\nws = Workspace.create(name = workspace_name,\n subscription_id = subscription_id,\n resource_group = resource_group, \n location = workspace_region,\n exist_ok=True)\n\nws.get_details()"],"metadata":{},"outputs":[],"execution_count":6},{"cell_type":"code","source":["ws = Workspace(workspace_name = workspace_name,\n subscription_id = subscription_id,\n resource_group = resource_group)\n\n# persist the subscription id, resource group name, and workspace name in aml_config/config.json.\nws.write_config()"],"metadata":{},"outputs":[],"execution_count":7},{"cell_type":"code","source":["%sh\ncat /databricks/driver/aml_config/config.json"],"metadata":{},"outputs":[],"execution_count":8},{"cell_type":"code","source":["# import the Workspace class and check the azureml SDK version\nfrom azureml.core import Workspace\n\nws = Workspace.from_config()\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep = '\\n')"],"metadata":{},"outputs":[],"execution_count":9},{"cell_type":"code","source":["dbutils.notebook.exit(\"success\")"],"metadata":{},"outputs":[],"execution_count":10},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":11}],"metadata":{"name":"01.Installation_and_Configuration","notebookId":3874566296719377},"nbformat":4,"nbformat_minor":0}

View File

@@ -1 +0,0 @@
{"cells":[{"cell_type":"markdown","source":["Azure ML & Azure Databricks notebooks by Parashar Shah.\n\nCopyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License."],"metadata":{}},{"cell_type":"markdown","source":["Please ensure you have run all previous notebooks in sequence before running this."],"metadata":{}},{"cell_type":"markdown","source":["#Data Ingestion"],"metadata":{}},{"cell_type":"code","source":["import os\nimport urllib"],"metadata":{},"outputs":[],"execution_count":4},{"cell_type":"code","source":["# Download AdultCensusIncome.csv from Azure CDN. This file has 32,561 rows.\nbasedataurl = \"https://amldockerdatasets.azureedge.net\"\ndatafile = \"AdultCensusIncome.csv\"\ndatafile_dbfs = os.path.join(\"/dbfs\", datafile)\n\nif os.path.isfile(datafile_dbfs):\n print(\"found {} at {}\".format(datafile, datafile_dbfs))\nelse:\n print(\"downloading {} to {}\".format(datafile, datafile_dbfs))\n urllib.request.urlretrieve(os.path.join(basedataurl, datafile), datafile_dbfs)"],"metadata":{},"outputs":[],"execution_count":5},{"cell_type":"code","source":["# Create a Spark dataframe out of the csv file.\ndata_all = sqlContext.read.format('csv').options(header='true', inferSchema='true', ignoreLeadingWhiteSpace='true', ignoreTrailingWhiteSpace='true').load(datafile)\nprint(\"({}, {})\".format(data_all.count(), len(data_all.columns)))\ndata_all.printSchema()"],"metadata":{},"outputs":[],"execution_count":6},{"cell_type":"code","source":["#renaming columns\ncolumns_new = [col.replace(\"-\", \"_\") for col in data_all.columns]\ndata_all = data_all.toDF(*columns_new)\ndata_all.printSchema()"],"metadata":{},"outputs":[],"execution_count":7},{"cell_type":"code","source":["display(data_all.limit(5))"],"metadata":{},"outputs":[],"execution_count":8},{"cell_type":"markdown","source":["#Data Preparation"],"metadata":{}},{"cell_type":"code","source":["# Choose feature columns and the label column.\nlabel = \"income\"\nxvals_all = set(data_all.columns) - {label}\n\n#dbutils.widgets.remove(\"xvars_multiselect\")\ndbutils.widgets.removeAll()\n\ndbutils.widgets.multiselect('xvars_multiselect', 'hours_per_week', xvals_all)\nxvars_multiselect = dbutils.widgets.get(\"xvars_multiselect\")\nxvars = xvars_multiselect.split(\",\")\n\nprint(\"label = {}\".format(label))\nprint(\"features = {}\".format(xvars))\n\ndata = data_all.select([*xvars, label])\n\n# Split data into train and test.\ntrain, test = data.randomSplit([0.75, 0.25], seed=123)\n\nprint(\"train ({}, {})\".format(train.count(), len(train.columns)))\nprint(\"test ({}, {})\".format(test.count(), len(test.columns)))"],"metadata":{},"outputs":[],"execution_count":10},{"cell_type":"markdown","source":["#Data Persistence"],"metadata":{}},{"cell_type":"code","source":["# Write the train and test data sets to intermediate storage\ntrain_data_path = \"AdultCensusIncomeTrain\"\ntest_data_path = \"AdultCensusIncomeTest\"\n\ntrain_data_path_dbfs = os.path.join(\"/dbfs\", \"AdultCensusIncomeTrain\")\ntest_data_path_dbfs = os.path.join(\"/dbfs\", \"AdultCensusIncomeTest\")\n\ntrain.write.mode('overwrite').parquet(train_data_path)\ntest.write.mode('overwrite').parquet(test_data_path)\nprint(\"train and test datasets saved to {} and {}\".format(train_data_path_dbfs, 
test_data_path_dbfs))"],"metadata":{},"outputs":[],"execution_count":12},{"cell_type":"code","source":["dbutils.notebook.exit(\"success\")"],"metadata":{},"outputs":[],"execution_count":13},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":14}],"metadata":{"name":"02.Ingest_data","notebookId":3874566296719393},"nbformat":4,"nbformat_minor":0}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +0,0 @@
{"cells":[{"cell_type":"markdown","source":["Azure ML & Azure Databricks notebooks by Parashar Shah.\n\nCopyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License."],"metadata":{}},{"cell_type":"markdown","source":["Please ensure you have run all previous notebooks in sequence before running this. This notebook uses image from ACI notebook for deploying to AKS."],"metadata":{}},{"cell_type":"code","source":["from azureml.core import Workspace\nimport azureml.core\n\n# Check core SDK version number\nprint(\"SDK version:\", azureml.core.VERSION)\n\n#'''\nws = Workspace.from_config()\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep = '\\n')\n#'''"],"metadata":{},"outputs":[],"execution_count":3},{"cell_type":"code","source":["# List images by ws\n\nfrom azureml.core.image import ContainerImage\nfor i in ContainerImage.list(workspace = ws):\n print('{}(v.{} [{}]) stored at {} with build log {}'.format(i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))"],"metadata":{},"outputs":[],"execution_count":4},{"cell_type":"code","source":["from azureml.core.image import Image\nmyimage = Image(workspace=ws, id=\"aciws:25\")"],"metadata":{},"outputs":[],"execution_count":5},{"cell_type":"code","source":["#create AKS compute\n#it may take 20-25 minutes to create a new cluster\n\nfrom azureml.core.compute import AksCompute, ComputeTarget\n\n# Use the default configuration (can also provide parameters to customize)\nprov_config = AksCompute.provisioning_configuration()\n\naks_name = 'ps-aks-clus2' \n\n# Create the cluster\naks_target = ComputeTarget.create(workspace = ws, \n name = aks_name, \n provisioning_configuration = prov_config)\n\naks_target.wait_for_completion(show_output = True)\n\nprint(aks_target.provisioning_state)\nprint(aks_target.provisioning_errors)"],"metadata":{},"outputs":[],"execution_count":6},{"cell_type":"code","source":["from azureml.core.webservice import Webservice\nhelp( Webservice.deploy_from_image)"],"metadata":{},"outputs":[],"execution_count":7},{"cell_type":"code","source":["from azureml.core.webservice import Webservice, AksWebservice\nfrom azureml.core.image import ContainerImage\n\n#Set the web service configuration (using default here)\naks_config = AksWebservice.deploy_configuration()\n\n#unique service name\nservice_name ='ps-aks-service'\n\n# Webservice creation using single command, there is a variant to use image directly as well.\naks_service = Webservice.deploy_from_image(\n workspace=ws, \n name=service_name,\n deployment_config = aks_config,\n image = myimage,\n deployment_target = aks_target\n )\n\naks_service.wait_for_deployment(show_output=True)"],"metadata":{},"outputs":[],"execution_count":8},{"cell_type":"code","source":["#for using the Web HTTP API \nprint(aks_service.scoring_uri)\nprint(aks_service.get_keys())"],"metadata":{},"outputs":[],"execution_count":9},{"cell_type":"code","source":["import json\n\n#get the some sample data\ntest_data_path = \"AdultCensusIncomeTest\"\ntest = spark.read.parquet(test_data_path).limit(5)\n\ntest_json = json.dumps(test.toJSON().collect())\n\nprint(test_json)"],"metadata":{},"outputs":[],"execution_count":10},{"cell_type":"code","source":["#using data defined above predict if income is >50K (1) or <=50K (0)\naks_service.run(input_data=test_json)"],"metadata":{},"outputs":[],"execution_count":11},{"cell_type":"code","source":["#comment to not 
delete the web service\naks_service.delete()\n#image.delete()\n#model.delete()\n#aks_target.delete()"],"metadata":{},"outputs":[],"execution_count":12},{"cell_type":"code","source":[""],"metadata":{},"outputs":[],"execution_count":13}],"metadata":{"name":"04.DeploytoACI","notebookId":3874566296719318},"nbformat":4,"nbformat_minor":0}

View File

@@ -1,29 +0,0 @@
# Azure Databricks - Azure Machine Learning SDK Sample Notebooks
**NOTE**: With the latest version of Azure Machine Learning SDK, there are some API changes due to which previous version of notebooks will not work.
Please remove the previous SDK version and install the latest SDK by installing **azureml-sdk[databricks]** as a PyPi library in your Azure Databricks workspace.
**NOTE**: Please create your Azure Databricks cluster as v4.x (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: Some packages, such as psutil, upgrade libraries in a way that can cause a conflict; please install such packages with frozen library versions, e.g. "psutil **cryptography==1.5 pyopenssl==16.0.0 ipython==2.2.0**", to avoid install errors. This issue is related to Databricks and not to the AML SDK.
**NOTE**: You should have at least contributor access to your Azure subscription to run some of the notebooks.
The IPython notebooks have to be run sequentially after making changes based on your subscription. The corresponding DBC archive contains all the notebooks and can be imported into your Databricks workspace. You can then run the notebooks after importing the .dbc archive instead of downloading them individually.
This set of notebooks relates to an income prediction experiment based on this [dataset](https://archive.ics.uci.edu/ml/datasets/adult) and demonstrates how to prepare data, train, and operationalize a Spark ML model with the Azure ML Python SDK from within Azure Databricks. For details on SDK concepts, please refer to these [notebooks](https://github.com/Azure/MachineLearningNotebooks).
(Recommended) [Azure Databricks AML SDK notebooks](Databricks_AMLSDK_github.dbc): a single DBC package to import all notebooks into your Azure Databricks workspace.
01. [Installation and Configuration](01.Installation_and_Configuration.ipynb): Install the Azure ML Python SDK and Initialize an Azure ML Workspace and save the Workspace configuration file.
02. [Ingest data](02.Ingest_data.ipynb): Download the Adult Census Income dataset and split it into train and test sets.
03. [Build model](03a.Build_model.ipynb): Train a binary classification model in Azure Databricks with a Spark ML Pipeline.
04. [Build model with Run History](03b.Build_model_runHistory.ipynb): Train model and also capture run history (tracking) with Azure ML Python SDK.
05. [Deploy to ACI](04.Deploy_to_ACI.ipynb): Deploy model to Azure Container Instance (ACI) with Azure ML Python SDK.
06. [Deploy to AKS](04.Deploy_to_AKS_existingImage.ipynb): Deploy model to Azure Kubernetes Service (AKS) with Azure ML Python SDK from an existing image with model, conda, and score file.
Copyright (c) Microsoft Corporation. All rights reserved.
All notebooks in this folder are licensed under the MIT License.
Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

File diff suppressed because it is too large

View File

@@ -0,0 +1,16 @@
## Examples to get started with Azure Machine Learning service
Learn how to use Azure Machine Learning services for experimentation and model management.
As a prerequisite, run the [configuration notebook](../configuration.ipynb) first to set up your Azure ML Workspace. Then, run the notebooks in the following recommended order.
* [train-within-notebook](train-within-notebook/train-within-notebook.ipynb): Train a model while tracking run history, and learn how to deploy the model as a web service to Azure Container Instance.
* [train-on-local](train-on-local/train-on-local.ipynb): Learn how to submit a run and use Azure ML managed run configuration.
* [train-on-aci](train-on-aci/train-on-aci.ipynb): Submit a remote run on serverless Docker-based compute.
* [train-on-remote-vm](train-on-remote-vm/train-on-remote-vm.ipynb): Use Data Science Virtual Machine as a target for remote runs.
* [logging-api](logging-api/logging-api.ipynb): Learn about the details of logging metrics to run history.
* [register-model-create-image-deploy-service](register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb): Learn about the details of model management.
* [production-deploy-to-aks](production-deploy-to-aks/production-deploy-to-aks.ipynb): Deploy a model to production at scale on Azure Kubernetes Service.
* [enable-data-collection-for-models-in-aks](enable-data-collection-for-models-in-aks/enable-data-collection-for-models-in-aks.ipynb): Learn about data collection APIs for a deployed model.
* [enable-app-insights-in-production-service](enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb): Learn how to use App Insights with a production web service.

View File

@@ -139,7 +139,7 @@
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",

View File

@@ -53,6 +53,7 @@
"import logging\n",
"import os\n",
"import random\n",
"import time\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
@@ -136,7 +137,28 @@
" print('Creating a new DSVM.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
" dsvm_compute.wait_for_completion(show_output = True)\n",
" print(\"Waiting one minute for ssh to be accessible\")\n",
" time.sleep(60) # Wait for ssh to be accessible"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"conda_run_config.target = dsvm_compute\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])\n",
"conda_run_config.environment.python.conda_dependencies = cd"
]
},
{
@@ -191,7 +213,7 @@
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
@@ -217,7 +239,7 @@
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder, \n",
" compute_target = dsvm_compute,\n",
" run_configuration=conda_run_config,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )\n"

View File

@@ -134,7 +134,7 @@
"from azureml.core.compute import ComputeTarget\n",
"\n",
"# Choose a name for your cluster.\n",
"batchai_cluster_name = \"cpucluster\"\n",
"batchai_cluster_name = \"automlcl\"\n",
"\n",
"found = False\n",
"# Check if this compute target already exists in the workspace.\n",
@@ -160,6 +160,27 @@
" # For a more detailed view of current Batch AI cluster status, use the 'status' property."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to the Batch AI cluster\n",
"conda_run_config.target = compute_target\n",
"conda_run_config.environment.docker.enabled = True\n",
"conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])\n",
"conda_run_config.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -212,7 +233,7 @@
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
@@ -238,7 +259,7 @@
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" path = project_folder,\n",
" compute_target = compute_target,\n",
" run_configuration=conda_run_config,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )\n"

View File

@@ -131,7 +131,7 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import RemoteCompute\n",
"from azureml.core.compute import ComputeTarget, RemoteCompute\n",
"import time\n",
"\n",
"# Add your VM information below\n",
@@ -147,7 +147,8 @@
" print('Using existing compute.')\n",
" dsvm_compute = ws.compute_targets[compute_name]\n",
"else:\n",
" RemoteCompute.attach(workspace=ws, name=compute_name, address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)\n",
" attach_config = RemoteCompute.attach_configuration(address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)\n",
" ComputeTarget.attach(workspace=ws, name=compute_name, attach_configuration=attach_config)\n",
"\n",
" while ws.compute_targets[compute_name].provisioning_state == 'Creating':\n",
" time.sleep(1)\n",
@@ -157,7 +158,26 @@
" if dsvm_compute.provisioning_state == 'Failed':\n",
" print('Attached failed.')\n",
" print(dsvm_compute.provisioning_errors)\n",
" dsvm_compute.delete()"
" dsvm_compute.detach()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"# Set compute target to the Linux DSVM\n",
"conda_run_config.target = dsvm_compute\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])\n",
"conda_run_config.environment.python.conda_dependencies = cd"
]
},
{
@@ -242,12 +262,13 @@
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|\n",
"|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|\n",
"|**enable_cache**|Setting this to *True* enables preprocess done once and reuse the same preprocessed data for all the iterations. Default value is True.\n",
"|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline.<br>Default is *1*; you can set it to *-1* to use all cores.|"
]
},
@@ -268,7 +289,7 @@
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" path = project_folder,\n",
" compute_target = dsvm_compute,\n",
" run_configuration=conda_run_config,\n",
" data_script = project_folder + \"/get_data.py\",\n",
" **automl_settings\n",
" )\n"
@@ -326,6 +347,23 @@
"remote_run.wait_for_completion(show_output = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pre-process cache cleanup\n",
"The preprocess data gets cache at user default file store. When the run is completed the cache can be cleaned by running below cell"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.clean_preprocessor_cache()"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -367,7 +405,7 @@
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations.\n",
"remote_run.cancel()\n",
"# remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move onto iteration 2.\n",
"# remote_run.cancel_iteration(1)"

View File

@@ -160,7 +160,7 @@
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",

View File

@@ -162,7 +162,7 @@
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.<br>**Note:** If input data is sparse, you cannot use *True*.|\n",

View File

@@ -135,7 +135,9 @@
"except:\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size=\"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name=compute_target_name, provisioning_configuration=dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output=True)"
" dsvm_compute.wait_for_completion(show_output=True)\n",
" print(\"Waiting one minute for ssh to be accessible\")\n",
" time.sleep(60) # Wait for ssh to be accessible"
]
},
{
@@ -282,17 +284,13 @@
"\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"import os\n",
"from os.path import expanduser, join, dirname\n",
"\n",
"def get_data():\n",
" # Burning man 2016 data\n",
" df = pd.read_csv(\"/tmp/azureml_runs/data/data.tsv\", delimiter=\"\\t\", quotechar='\"')\n",
" # get integer labels\n",
" le = LabelEncoder()\n",
" le.fit(df[\"Label\"].values)\n",
" y = le.transform(df[\"Label\"].values)\n",
" y = df[\"Label\"].values\n",
" X = df.drop([\"Label\"], axis=1)\n",
"\n",
" X_train, _, y_train, _ = train_test_split(X, y, test_size=0.1, random_state=42)\n",
@@ -312,12 +310,13 @@
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
"|**n_cross_validations**|Number of cross validation splits|\n",
"|**max_concurrent_iterations**|Max number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM\n",
"|**preprocess**| *True/False* <br>Setting this to *True* enables Auto ML to perform preprocessing <br>on the input to handle *missing data*, and perform some common *feature extraction*|\n",
"|**enable_cache**|Setting this to *True* enables preprocess done once and reuse the same preprocessed data for all the iterations. Default value is True.|\n",
"|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> Default is *1*, you can set it to *-1* to use all cores|"
]
},
@@ -445,6 +444,23 @@
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pre-process cache cleanup\n",
"The preprocess data gets cache at user default file store. When the run is completed the cache can be cleaned by running below cell"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.clean_preprocessor_cache()"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -512,26 +528,19 @@
"source": [
"import sklearn\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder\n",
"from pandas_ml import ConfusionMatrix\n",
"\n",
"df = pd.read_csv(\"https://automldemods.blob.core.windows.net/datasets/PlayaEvents2016,_1.6MB,_3.4k-rows.cleaned.2.tsv\",\n",
" delimiter=\"\\t\", quotechar='\"')\n",
"\n",
"# get integer labels\n",
"le = LabelEncoder()\n",
"le.fit(df[\"Label\"].values)\n",
"y = le.transform(df[\"Label\"].values)\n",
"y = df[\"Label\"].values\n",
"X = df.drop([\"Label\"], axis=1)\n",
"\n",
"_, X_test, _, y_test = train_test_split(X, y, test_size=0.1, random_state=42)\n",
"\n",
"ypred = fitted_model.predict(X_test.values)\n",
"\n",
"ypred_strings = le.inverse_transform(ypred)\n",
"ytest_strings = le.inverse_transform(y_test)\n",
"\n",
"cm = ConfusionMatrix(ytest_strings, ypred_strings)\n",
"cm = ConfusionMatrix(y_test, ypred)\n",
"\n",
"print(cm)\n",
"\n",

View File

@@ -120,7 +120,7 @@
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",

View File

@@ -186,7 +186,7 @@
"metadata": {},
"outputs": [],
"source": [
"dsvm_name = 'mydsvmd'\n",
"dsvm_name = 'mydsvmc'\n",
"\n",
"try:\n",
" while ws.compute_targets[dsvm_name].provisioning_state == 'Creating':\n",
@@ -198,7 +198,9 @@
" print('Creating a new DSVM.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
" dsvm_compute.wait_for_completion(show_output = True)"
" dsvm_compute.wait_for_completion(show_output = True)\n",
" print(\"Waiting one minute for ssh to be accessible\")\n",
" time.sleep(60) # Wait for ssh to be accessible"
]
},
{

View File

@@ -27,22 +27,6 @@
"5. Explore best model's explanation\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install AzureML Explainer SDK "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install azureml_sdk[explain]"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -159,7 +143,7 @@
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i>|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**max_time_sec**|Time limit in minutes for each iterations|\n",
"|**iterations**|Number of iterations. In each iteration Auto ML trains the data with a specific pipeline|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
@@ -179,7 +163,7 @@
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" max_time_sec = 12000,\n",
" iteration_timeout_minutes = 200,\n",
" iterations = 10,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
@@ -237,16 +221,6 @@
"RunDetails(local_run).show() "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"child_run = next(local_run.get_children())\n",
"RunDetails(child_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -13,16 +13,18 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 15a: Classification with ensembling on local compute\n",
"# AutoML 17: Classification with Local Compute with Tesnorflow DNNClassifier and LinearClassifier using whitelist models feature.\n",
"\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) to showcase how you can use AutoML for a simple classification problem.\n",
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"This notebooks shows how can automl can be trained on a a selected list of models,see the readme.md for the models.\n",
"This trains the model exclusively on tensorflow based models.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an `Experiment` in an existing `Workspace`.\n",
"2. Configure AutoML using `AutoMLConfig` which enables an extra ensembling iteration.\n",
"3. Train the model using local compute.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model on a whilelisted models using local compute. \n",
"4. Explore the results.\n",
"5. Test the best fitted model.\n"
]
@@ -108,7 +110,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Training Data"
"## Load Training Data\n",
"\n",
"This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
]
},
{
@@ -121,11 +125,9 @@
"\n",
"digits = datasets.load_digits()\n",
"\n",
"# Exclude the first 50 rows from training so that they can be used for test.\n",
"X_train = digits.data[150:,:]\n",
"y_train = digits.target[150:]\n",
"X_valid = digits.data[50:150]\n",
"y_valid = digits.target[50:150]"
"# Exclude the first 100 rows from training so that they can be used for test.\n",
"X_train = digits.data[100:,:]\n",
"y_train = digits.target[100:]"
]
},
{
@@ -145,10 +147,6 @@
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**X_valid**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y_valid**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n",
"|**enable_ensembling**|Flag to enable an ensembling iteration after all the other iterations complete.|\n",
"|**ensemble_iterations**|Number of iterations during which we choose a fitted pipeline to be part of the final ensemble.|\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
]
},
@@ -159,17 +157,16 @@
"outputs": [],
"source": [
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'classification.log',\n",
" debug_log = 'automl_errors.log',\n",
" primary_metric = 'AUC_weighted',\n",
" iteration_timeout_minutes = 60,\n",
" iterations = 10,\n",
" n_cross_validations = 3,\n",
" verbosity = logging.INFO,\n",
" X = X_train, \n",
" y = y_train,\n",
" X_valid = X_valid,\n",
" y_valid = y_valid,\n",
" enable_ensembling = True,\n",
" ensemble_iterations = 5,\n",
" enable_tf=True,\n",
" whitelist_models=[\"TensorFlowLinearClassifier\", \"TensorFlowDNN\"],\n",
" path = project_folder)"
]
},
@@ -177,7 +174,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the Model\n",
"## Train the Models\n",
"\n",
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
@@ -198,37 +195,7 @@
"metadata": {},
"outputs": [],
"source": [
"local_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can continue an interrupted local run by calling `continue_experiment` without the `iterations` parameter, or run more iterations for a completed run by specifying the `iterations` parameter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = local_run.continue_experiment(X = X_train, \n",
" y = y_train,\n",
" X_valid = X_valid,\n",
" y_valid = y_valid,\n",
" show_output = True,\n",
" iterations = 5)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run"
"local_run\n"
]
},
{
@@ -291,7 +258,7 @@
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method on `automl_classifier` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
@@ -370,7 +337,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Testing Our Best Pipeline\n",
"#### Testing Our Best Fitted Model\n",
"We will try to predict 2 digits and see how our model works."
]
},
@@ -397,11 +364,11 @@
"metadata": {
"authors": [
{
"name": "ratanase"
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python [default]",
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},

View File

@@ -0,0 +1,397 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 18: Energy Demand Forecasting\n",
"\n",
"In this example, we show how AutoML can be used for energy demand forecasting.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Instantiating AutoMLConfig with new task type \"forecasting\" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: \"time_column_name\" \n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Testing the fitted model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import os\n",
"import logging\n",
"import warnings\n",
"warnings.showwarning = lambda *args, **kwargs: None\n",
"\n",
"\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = 'automl-energydemandforecasting'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-energydemandforecasting'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Read Data\n",
"Read energy demanding data from file, and preview data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.read_csv(\"nyc_energy.csv\", parse_dates=['timeStamp'])\n",
"data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split the data to train and test\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train = data[data['timeStamp'] < '2017-02-01']\n",
"test = data[data['timeStamp'] >= '2017-02-01']\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare the test data, we will feed X_test to the fitted model and get prediction"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_test = test.pop('demand').values\n",
"X_test = test"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split the train data to train and valid\n",
"\n",
"Use one month's data as valid data\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train = train[train['timeStamp'] < '2017-01-01']\n",
"X_valid = train[train['timeStamp'] >= '2017-01-01']\n",
"y_train = X_train.pop('demand').values\n",
"y_valid = X_valid.pop('demand').values\n",
"print(X_train.shape)\n",
"print(y_train.shape)\n",
"print(X_valid.shape)\n",
"print(y_valid.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
"|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**X_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"time_column_name = 'timeStamp'\n",
"automl_settings = {\n",
" \"time_column_name\": time_column_name,\n",
"}\n",
"\n",
"\n",
"automl_config = AutoMLConfig(task = 'forecasting',\n",
" debug_log = 'automl_nyc_energy_errors.log',\n",
" primary_metric='normalized_root_mean_squared_error',\n",
" iterations = 10,\n",
" iteration_timeout_minutes = 5,\n",
" X = X_train,\n",
" y = y_train,\n",
" X_valid = X_valid,\n",
" y_valid = y_valid,\n",
" path=project_folder,\n",
" verbosity = logging.INFO,\n",
" **automl_settings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Below we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"fitted_model.steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"Predict on training and test set, and calculate residual values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_pred = fitted_model.predict(X_test)\n",
"y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define a Check Data Function\n",
"\n",
"Remove the nan values from y_test to avoid error when calculate metrics "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def _check_calc_input(y_true, y_pred, rm_na=True):\n",
" \"\"\"\n",
" Check that 'y_true' and 'y_pred' are non-empty and\n",
" have equal length.\n",
"\n",
" :param y_true: Vector of actual values\n",
" :type y_true: array-like\n",
"\n",
" :param y_pred: Vector of predicted values\n",
" :type y_pred: array-like\n",
"\n",
" :param rm_na:\n",
" If rm_na=True, remove entries where y_true=NA and y_pred=NA.\n",
" :type rm_na: boolean\n",
"\n",
" :return:\n",
" Tuple (y_true, y_pred). if rm_na=True,\n",
" the returned vectors may differ from their input values.\n",
" :rtype: Tuple with 2 entries\n",
" \"\"\"\n",
" if len(y_true) != len(y_pred):\n",
" raise ValueError(\n",
" 'the true values and prediction values do not have equal length.')\n",
" elif len(y_true) == 0:\n",
" raise ValueError(\n",
" 'y_true and y_pred are empty.')\n",
" # if there is any non-numeric element in the y_true or y_pred,\n",
" # the ValueError exception will be thrown.\n",
" y_true = np.array(y_true).astype(float)\n",
" y_pred = np.array(y_pred).astype(float)\n",
" if rm_na:\n",
" # remove entries both in y_true and y_pred where at least\n",
" # one element in y_true or y_pred is missing\n",
" y_true_rm_na = y_true[~(np.isnan(y_true) | np.isnan(y_pred))]\n",
" y_pred_rm_na = y_pred[~(np.isnan(y_true) | np.isnan(y_pred))]\n",
" return (y_true_rm_na, y_pred_rm_na)\n",
" else:\n",
" return y_true, y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use the Check Data Function to remove the nan values from y_test to avoid error when calculate metrics "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_test,y_pred = _check_calc_input(y_test,y_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calculate metrics for the prediction\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"[Test Data] \\nRoot Mean squared error: %.2f\" % np.sqrt(mean_squared_error(y_test, y_pred)))\n",
"# Explained variance score: 1 is perfect prediction\n",
"print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))\n",
"print('R2 score: %.2f' % r2_score(y_test, y_pred))\n",
"\n",
"\n",
"\n",
"# Plot outputs\n",
"%matplotlib notebook\n",
"test_pred = plt.scatter(y_test, y_pred, color='b')\n",
"test_test = plt.scatter(y_test, y_test, color='g')\n",
"plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n",
"plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "xiaga"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,390 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 18B: Orange Juice Sales Forecasting\n",
"\n",
"In this example, we use AutoML to find and tune a time-series forecasting model.\n",
"\n",
"Make sure you have executed the [configuration notebook](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook, you will:\n",
"1. Create an Experiment in an existing Workspace\n",
"2. Instantiate an AutoMLConfig \n",
"3. Find and train a forecasting model using local compute\n",
"4. Evaluate the performance of the model\n",
"\n",
"## Sample Data\n",
"The examples in the follow code samples use the [University of Chicago's Dominick's Finer Foods dataset](https://research.chicagobooth.edu/kilts/marketing-databases/dominicks) to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment is a named object in a Workspace which represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import os\n",
"import logging\n",
"\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = 'automl-ojsalesforecasting'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-ojsalesforecasting'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Read Data\n",
"You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"time_column_name = 'WeekStarting'\n",
"data = pd.read_csv(\"dominicks_OJ.csv\", parse_dates=[time_column_name])\n",
"data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. \n",
"\n",
"The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"grain_column_names = ['Store', 'Brand']\n",
"nseries = data.groupby(grain_column_names).ngroups\n",
"print('Data contains {0} individual time-series.'.format(nseries))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data Splitting\n",
"For the purposes of demonstration and later forecast evaluation, we now split the data into a training and a testing set. The test set will contain the final 20 weeks of observed sales for each time-series."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ntest_periods = 20\n",
"\n",
"def split_last_n_by_grain(df, n):\n",
" \"\"\"\n",
" Group df by grain and split on last n rows for each group\n",
" \"\"\"\n",
" df_grouped = (df.sort_values(time_column_name) # Sort by ascending time\n",
" .groupby(grain_column_names, group_keys=False))\n",
" df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])\n",
" df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])\n",
" return df_head, df_tail\n",
"\n",
"X_train, X_test = split_last_n_by_grain(data, ntest_periods)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Modeling\n",
"\n",
"For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:\n",
"* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span \n",
"* Impute missing values in the target (via forward-fill) and feature columns (using median column values) \n",
"* Create grain-based features to enable fixed effects across different series\n",
"* Create time-based features to assist in learning seasonal patterns\n",
"* Encode categorical variables to numeric quantities\n",
"\n",
"AutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.\n",
"\n",
"You are almost ready to start an AutoML training job. We will first need to create a validation set from the existing training set (i.e. for hyper-parameter tuning): "
]
},
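  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before creating the validation split in the next cell, the following purely illustrative sketch shows the forward-fill and median imputation ideas from the list above using plain pandas. This is not AutoML's internal code; it only demonstrates the concepts on a copy of the raw data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch only -- AutoML performs equivalent steps internally.\n",
    "example = data.sort_values(time_column_name).copy()\n",
    "\n",
    "# Forward-fill the target within each individual time-series (grain).\n",
    "example['Quantity'] = example.groupby(grain_column_names)['Quantity'].ffill()\n",
    "\n",
    "# Median-fill any remaining missing values in the numeric feature columns.\n",
    "numeric_cols = example.select_dtypes(include=[np.number]).columns\n",
    "example[numeric_cols] = example[numeric_cols].fillna(example[numeric_cols].median())\n",
    "\n",
    "example.isnull().sum()"
   ]
  },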
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nvalidation_periods = 20\n",
"X_train, X_validate = split_last_n_by_grain(X_train, nvalidation_periods)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We also need to separate the target column from the rest of the DataFrame: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"target_column_name = 'Quantity'\n",
"y_train = X_train.pop(target_column_name).values\n",
"y_validate = X_validate.pop(target_column_name).values "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an AutoMLConfig\n",
"\n",
"The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, and the training and validation data. \n",
"\n",
"For forecasting tasks, there are some additional parameters that can be set: the name of the input data column, holding the date/time and the grain column names. A time column is required for forecasting, while the grain is optional. If a grain is not given, the forecaster assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. \n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
"|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|\n",
"|**X**|Training matrix of features, shape = [n_training_samples, n_features]|\n",
"|**y**|Target values, shape = [n_training_samples, ]|\n",
"|**X_valid**|Validation matrix of features, shape = [n_validation_samples, n_features]|\n",
"|**y_valid**|Target values for validation, shape = [n_validation_samples, ]\n",
"|**enable_ensembling**|Allow AutoML to create ensembles of the best performing models\n",
"|**debug_log**|Log file path for writing debugging information\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
" 'time_column_name': time_column_name,\n",
" 'grain_column_names': grain_column_names,\n",
" 'drop_column_names': ['logQuantity']\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task='forecasting',\n",
" debug_log='automl_oj_sales_errors.log',\n",
" primary_metric='normalized_root_mean_squared_error',\n",
" iterations=10,\n",
" X=X_train,\n",
" y=y_train,\n",
" X_valid=X_validate,\n",
" y_valid=y_validate,\n",
" enable_ensembling=False,\n",
" path=project_folder,\n",
" verbosity=logging.INFO,\n",
" **automl_settings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.\n",
"Information from each iteration will be printed to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_pipeline = local_run.get_output()\n",
"fitted_pipeline.steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Make Predictions from the Best Fitted Model\n",
"Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_test = X_test.pop(target_column_name).values"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_test.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data. \n",
"\n",
"The target predictions can be retrieved by calling the `predict` method on the best model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_pred = fitted_pipeline.predict(X_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calculate evaluation metrics for the prediction\n",
"To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def MAPE(actual, pred):\n",
" \"\"\"\n",
" Calculate mean absolute percentage error.\n",
" Remove NA and values where actual is close to zero\n",
" \"\"\"\n",
" not_na = ~(np.isnan(actual) | np.isnan(pred))\n",
" not_zero = ~np.isclose(actual, 0.0)\n",
" actual_safe = actual[not_na & not_zero]\n",
" pred_safe = pred[not_na & not_zero]\n",
" APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)\n",
" return np.mean(APE)\n",
"\n",
"print(\"[Test Data] \\nRoot Mean squared error: %.2f\" % np.sqrt(mean_squared_error(y_test, y_pred)))\n",
"print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))\n",
"print('MAPE: %.2f' % MAPE(y_test, y_pred))"
]
}
],
"metadata": {
"authors": [
{
"name": "erwright"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,6 +1,7 @@
# Table of Contents
1. [Automated ML Introduction](#introduction)
1. [Running samples in Azure Notebooks](#jupyter)
1. [Running samples in Azure Databricks](#databricks)
1. [Running samples in a Local Conda environment](#localconda)
1. [Automated ML SDK Sample Notebooks](#samples)
1. [Documentation](#documentation)
@@ -16,6 +17,9 @@ If you are new to Data Science, AutoML will help you get jumpstarted by simplify
If you are an experienced data scientist, AutoML will help increase your productivity by intelligently performing the model and hyperparameter selection for your training, and it generates high quality models much more quickly than manually specifying several combinations of parameters and running training jobs. AutoML provides visibility into and access to all the training jobs and the performance characteristics of the models to help you further tune the pipeline if you desire.
Below are the three execution environments supported by AutoML.
<a name="jupyter"></a>
## Running samples in Azure Notebooks - Jupyter based notebooks in the Azure cloud
@@ -24,9 +28,14 @@ If you are an experienced data scientist, AutoML will help increase your product
1. Follow the instructions in the [../00.configuration](00.configuration.ipynb) notebook to create and connect to a workspace.
1. Open one of the sample notebooks.
**Make sure the Azure Notebook kernel is set to `Python 3.6`** when you open a notebook.
<a name="databricks"></a>
## Running samples in Azure Databricks
![set kernel to Python 3.6](../images/python36.png)
**NOTE**: Please create your Azure Databricks cluster as v4.x (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: You should have at least contributor access to your Azure subscription to run the notebook.
- Please remove the previous SDK version if there is any and install the latest SDK by installing **azureml-sdk[automl_databricks]** as a PyPi library in Azure Databricks workspace.
- Download the sample notebook 16a.auto-ml-classification-local-azuredatabricks from [GitHub](https://github.com/Azure/MachineLearningNotebooks) and import it into the Azure Databricks workspace.
- Attach the notebook to the cluster.
<a name="localconda"></a>
## Running samples in a Local Conda environment
@@ -57,9 +66,9 @@ There's no need to install mini-conda specifically.
### 3. Setup a new conda environment
The **automl/automl_setup** script creates a new conda environment, installs the necessary packages, configures the widget and starts a jupyter notebook.
It takes the conda environment name as an optional parameter. The default conda environment name is azure_automl. The exact command depends on the operating system. It can take about 10 minutes to execute.
It takes the conda environment name as an optional parameter. The default conda environment name is azure_automl. The exact command depends on the operating system. See the specific sections below for Windows, Mac and Linux. It can take about 10 minutes to execute.
## Windows
Start a conda command windows, cd to the **automl** folder where the sample notebooks were extracted and then run:
Start an **Anaconda Prompt** window, cd to the **automl** folder where the sample notebooks were extracted and then run:
```
automl_setup
```
@@ -186,6 +195,19 @@ bash automl_setup_linux.sh
- Enables an extra iteration for generating an Ensemble of models
- Uses remote Linux DSVM for training
- [16a.auto-ml-classification-local-azuredatabricks.ipynb](16a.auto-ml-classification-local-azuredatabricks.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Example of using AutoML for classification using Azure Databricks as the platform for training
- [17.auto-ml-classification_with_tensorflow.ipynb](17.auto-ml-classification_with_tensorflow.ipynb)
- Dataset: scikit learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
- Simple example of using Auto ML for classification with whitelisted TensorFlow models
- Uses local compute for training
- [18.auto-ml-timeseries.ipynb](18.auto-ml-timeseries.ipynb)
- Dataset: NYC energy demand data
- Example of using AutoML for timeseries data training
<a name="documentation"></a>
# Documentation
## Table of Contents
@@ -199,7 +221,7 @@ bash automl_setup_linux.sh
|Property|Description|Default|
|-|-|-|
|**primary_metric**|This is the metric that you want to optimize.<br><br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>balanced_accuracy</i><br><i>average_precision_score_weighted</i><br><i>precision_score_weighted</i><br><br> Regression supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>| Classification: accuracy <br><br> Regression: spearman_correlation
|**primary_metric**|This is the metric that you want to optimize.<br><br> Classification supports the following primary metrics <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i><br><br> Regression supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i><br><i>normalized_root_mean_squared_log_error</i>| Classification: accuracy <br><br> Regression: spearman_correlation
|**iteration_timeout_minutes**|Time limit in minutes for each iteration|None|
|**iterations**|Number of iterations. In each iteration trains the data with a specific pipeline. To get the best result, use at least 100. |100|
|**n_cross_validations**|Number of cross validation splits|None|
@@ -209,7 +231,7 @@ bash automl_setup_linux.sh
|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> You can set it to *-1* to use all cores|1|
|**experiment_exit_score**|*double* value indicating the target for *primary_metric*. <br> Once the target is surpassed the run terminates|None|
|**blacklist_models**|*Array* of *strings* indicating models to ignore for Auto ML from the list of models.|None|
|**whilelist_models**|*Array* of *strings* use only models listed for Auto ML from the list of models..|None|
|**whitelist_models**|*Array* of *strings* indicating the only models to be used for Auto ML, from the list of models.|None|
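As a hedged illustration only (assuming `X_train` and `y_train` already exist, and using example model names from the list in the next section), a classification configuration combining several of the settings above might look like:

```python
from azureml.train.automl import AutoMLConfig

# Illustrative sketch -- not a recommended configuration, just a combination
# of the settings described in the table above.
automl_config = AutoMLConfig(task='classification',
                             primary_metric='AUC_weighted',
                             iterations=100,
                             iteration_timeout_minutes=15,
                             n_cross_validations=5,
                             whitelist_models=['LogisticRegression', 'LightGBM'],
                             X=X_train,
                             y=y_train)
```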
<a name="cvsplits"></a>
## List of models for white list/blacklist
**Classification**
@@ -303,4 +325,3 @@ To resolve this issue, allocate a DSVM with more memory or reduce the value spec
## Iterations show as "Not Responding" in the RunDetails widget.
This can be caused by too many concurrent iterations for a remote DSVM. Each concurrent iteration usually takes 100% of a core when it is running. Some iterations can use multiple cores. So, the max_concurrent_iterations setting should always be less than the number of cores of the DSVM.
To resolve this issue, try reducing the value specified for the max_concurrent_iterations setting.
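For example, a minimal sketch of that rule (the core count shown here is an assumption):

```python
# Sketch: pick a value strictly below the DSVM core count and pass it to
# AutoMLConfig(..., max_concurrent_iterations=safe_concurrency).
dsvm_cores = 4                      # assumed core count of the remote DSVM
safe_concurrency = dsvm_cores - 1   # e.g. 3 concurrent iterations on 4 cores
```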

View File

@@ -8,9 +8,10 @@ dependencies:
- numpy>=1.11.0,<1.15.0
- cython
- urllib3<1.24
- scipy>=0.19.0,<0.20.0
- scipy>=1.0.0,<=1.1.0
- scikit-learn>=0.18.0,<=0.19.1
- pandas>=0.22.0,<0.23.0
- tensorflow>=1.12.0
# Required for azuremlftk
- dill
@@ -23,9 +24,9 @@ dependencies:
- pip:
# Required for azuremlftk
- https://azuremlpackages.blob.core.windows.net/forecasting/azuremlftk-0.1.18313.5a1-py3-none-any.whl
- https://azuremlpackages.blob.core.windows.net/forecasting/azuremlftk-0.1.18323.5a1-py3-none-any.whl
# Required packages for AzureML execution, history, and data preparation.
- azureml-sdk[automl,notebooks]
- azureml-sdk[automl,notebooks,explain]
- pandas_ml

View File

@@ -8,9 +8,10 @@ dependencies:
- numpy>=1.15.3
- cython
- urllib3<1.24
- scipy>=0.19.0,<0.20.0
- scipy>=1.0.0,<=1.1.0
- scikit-learn>=0.18.0,<=0.19.1
- pandas>=0.22.0,<0.23.0
- tensorflow>=1.12.0
# Required for azuremlftk
- dill
@@ -23,9 +24,10 @@ dependencies:
- pip:
# Required for azuremlftk
- https://azuremlpackages.blob.core.windows.net/forecasting/azuremlftk-0.1.18313.5a1-py3-none-any.whl
- https://azuremlpackages.blob.core.windows.net/forecasting/azuremlftk-0.1.18323.5a1-py3-none-any.whl
# Required packages for AzureML execution, history, and data preparation.
- azureml-sdk[automl,notebooks]
- azureml-sdk[automl,notebooks,explain]
- pandas_ml

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -244,7 +244,7 @@
"# Use the default configuration (can also provide parameters to customize)\n",
"prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"aks_name = 'my-aks-test1' \n",
"aks_name = 'my-aks-test2' \n",
"# Create the cluster\n",
"aks_target = ComputeTarget.create(workspace = ws, \n",
" name = aks_name, \n",
@@ -278,9 +278,10 @@
"%%time\n",
"resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n",
"create_name= 'myaks4'\n",
"aks_target = AksCompute.attach(workspace = ws, \n",
"attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
"aks_target = ComputeTarget.attach(workspace = ws, \n",
" name = create_name, \n",
" #esource_id=resource_id)\n",
" attach_configuration=attach_config)\n",
"## Wait for the operation to complete\n",
"aks_target.wait_for_provisioning(True)```"
]

View File

@@ -285,9 +285,10 @@
" %%time\n",
" resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n",
" create_name= 'myaks4'\n",
" aks_target = AksCompute.attach(workspace = ws, \n",
" attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
" aks_target = ComputeTarget.attach(workspace = ws, \n",
" name = create_name, \n",
" resource_id=resource_id)\n",
" attach_configuration=attach_config)\n",
" ## Wait for the operation to complete\n",
" aks_target.wait_for_provisioning(True)```"
]

View File

@@ -224,7 +224,8 @@
"\n",
"create_name='my-existing-aks' \n",
"# Create the cluster\n",
"aks_target = AksCompute.attach(workspace=ws, name=create_name, resource_id=resource_id)\n",
"attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
"aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config)\n",
"# Wait for the operation to complete\n",
"aks_target.wait_for_completion(True)\n",
"'''"

View File

@@ -36,13 +36,13 @@ In this directory, there are two types of notebooks:
* The first type of notebooks will introduce you to core Azure Machine Learning Pipelines features. The notebooks below belong in this category, and are designed to go in sequence:
1. [aml-pipelines-getting-started.ipynb](aml-pipelines-getting-started.ipynb)
2. [aml-pipelines-with-data-dependency-steps.ipynb](aml-pipelines-with-data-dependency-steps.ipynb)
3. [aml-pipelines-publish-and-run-using-rest-endpoint.ipynb](aml-pipelines-publish-and-run-using-rest-endpoint.ipynb)
4. [aml-pipelines-data-transfer.ipynb](aml-pipelines-data-transfer.ipynb)
5. [aml-pipelines-use-databricks-as-compute-target.ipynb](aml-pipelines-use-databricks-as-compute-target.ipynb)
6. [aml-pipelines-use-adla-as-compute-target.ipynb](aml-pipelines-use-adla-as-compute-target.ipynb)
1. aml-pipelines-getting-started.ipynb
2. aml-pipelines-with-data-dependency-steps.ipynb
3. aml-pipelines-publish-and-run-using-rest-endpoint.ipynb
4. aml-pipelines-data-transfer.ipynb
5. aml-pipelines-use-databricks-as-compute-target.ipynb
6. aml-pipelines-use-adla-as-compute-target.ipynb
* The second type of notebooks illustrate more sophisticated scenarios, and are independent of each other. These notebooks include:
- [pipeline-batch-scoring.ipynb](pipeline-batch-scoring.ipynb)
- [pipeline-style-transfer.ipynb](pipeline-style-transfer.ipynb)
- pipeline-batch-scoring.ipynb
- pipeline-style-transfer.ipynb

View File

@@ -0,0 +1,6 @@
## Azure Machine Learning service training examples
These examples show you:
* Distributed training of models on Machine Learning Compute cluster
* Hyperparameter tuning at scale
* Using Tensorboard with Azure ML Python SDK.

View File

@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 02. Distributed PyTorch with Horovod\n",
"# Distributed PyTorch with Horovod\n",
"In this tutorial, you will train a PyTorch model on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using distributed training via [Horovod](https://github.com/uber/horovod)."
]
},
@@ -60,6 +60,7 @@
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
@@ -92,7 +93,7 @@
"metadata": {},
"source": [
"## Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an [Azure Batch AI](https://docs.microsoft.com/azure/batch-ai/overview) cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an `AmlCompute` cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"\n",
"**Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process."
]
@@ -115,12 +116,12 @@
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
" max_nodes=6)\n",
" max_nodes=4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
@@ -175,6 +176,7 @@
"outputs": [],
"source": [
"import shutil\n",
"\n",
"shutil.copy('pytorch_horovod_mnist.py', project_folder)"
]
},
@@ -245,7 +247,7 @@
"outputs": [],
"source": [
"run = experiment.submit(estimator)\n",
"print(run.get_details())"
"print(run)"
]
},
{
@@ -263,6 +265,7 @@
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(run).show()"
]
},

View File

@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 04. Distributed Tensorflow with Horovod\n",
"# Distributed Tensorflow with Horovod\n",
"In this tutorial, you will train a word2vec model in TensorFlow using distributed training via [Horovod](https://github.com/uber/horovod)."
]
},
@@ -60,6 +60,7 @@
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
@@ -91,7 +92,7 @@
"metadata": {},
"source": [
"## Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an [Azure Batch AI](https://docs.microsoft.com/azure/batch-ai/overview) cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an `AmlCompute` cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"\n",
"**Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process."
]
@@ -114,12 +115,12 @@
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
" max_nodes=6)\n",
" max_nodes=4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
@@ -255,6 +256,7 @@
"outputs": [],
"source": [
"import shutil\n",
"\n",
"shutil.copy('tf_horovod_word2vec.py', project_folder)"
]
},

View File

@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 05. Distributed TensorFlow with parameter server\n",
"# Distributed TensorFlow with parameter server\n",
"In this tutorial, you will train a TensorFlow model on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset using native [distributed TensorFlow](https://www.tensorflow.org/deploy/distributed)."
]
},
@@ -60,6 +60,7 @@
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
@@ -91,7 +92,7 @@
"metadata": {},
"source": [
"## Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an [Azure Batch AI](https://docs.microsoft.com/azure/batch-ai/overview) cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an `AmlCompute` cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"\n",
"**Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process."
]
@@ -114,12 +115,12 @@
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
" max_nodes=6)\n",
" max_nodes=4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
@@ -167,6 +168,7 @@
"outputs": [],
"source": [
"import shutil\n",
"\n",
"shutil.copy('tf_mnist_replica.py', project_folder)"
]
},
@@ -262,6 +264,7 @@
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(run).show()"
]
},

View File

@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 41. Export Run History as Tensorboard logs\n",
"# Export Run History as Tensorboard logs\n",
"\n",
"1. Run some training and log some metrics into Run History\n",
"2. Export the run history to some directory as Tensorboard logs\n",
@@ -25,7 +25,10 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
},
{
@@ -40,6 +43,22 @@
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the Azure ML TensorBoard integration package if you haven't already."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install azureml-contrib-tensorboard"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -13,11 +13,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 40. Tensorboard Integration with Run History\n",
"# Tensorboard Integration with Run History\n",
"\n",
"1. Run a Tensorflow job locally and view its TB output live.\n",
"2. The same, for a DSVM.\n",
"3. And once more, with Batch AI.\n",
"3. And once more, with an AmlCompute cluster.\n",
"4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph."
]
},
@@ -26,7 +26,10 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](00.configuration.ipynb) Notebook first if you haven't."
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* Go through the [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook to:\n",
" * install the AML SDK\n",
" * create a workspace and its configuration file (`config.json`)"
]
},
{
@@ -41,6 +44,22 @@
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the Azure ML TensorBoard package."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install azureml-contrib-tensorboard"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -60,7 +79,8 @@
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics = True)"
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
@@ -256,7 +276,16 @@
"Tensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.\n",
"Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.\n",
"\n",
"If you are unfamiliar with DSVM configuration, check [04. Train in a remote VM (Ubuntu DSVM)](04.train-on-remote-vm.ipynb) for a more detailed breakdown."
"If you are unfamiliar with DSVM configuration, check [04. Train in a remote VM](04.train-on-remote-vm.ipynb) for a more detailed breakdown.\n",
"\n",
"**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node AmlCompute. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note that we only support Linux VMs and the commands below will spin a Linux VM only.\n",
"\n",
"```shell\n",
"# create a DSVM in your resource group\n",
"# note you need to be at least a contributor to the resource group in order to execute this command successfully.\n",
"(myenv) $ az vm create --resource-group <resource_group_name> --name <some_vm_name> --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username <username> --admin-password <password> --generate-ssh-keys --authentication-type password\n",
"```\n",
"You can also use [this url](https://portal.azure.com/#create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal."
]
},
{
@@ -268,16 +297,16 @@
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"compute_target_name = 'cpu-dsvm'\n",
"compute_target_name = 'cpudsvm'\n",
"\n",
"try:\n",
" compute_target = DsvmCompute(workspace = ws, name = compute_target_name)\n",
" compute_target = DsvmCompute(workspace=ws, name=compute_target_name)\n",
" print('found existing:', compute_target.name)\n",
"except ComputeTargetException:\n",
" print('creating new.')\n",
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
" compute_target = DsvmCompute.create(ws, name = compute_target_name, provisioning_configuration = dsvm_config)\n",
"compute_target.wait_for_completion(show_output = True)"
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size=\"Standard_D2_v2\")\n",
" compute_target = DsvmCompute.create(ws, name=compute_target_name, provisioning_configuration=dsvm_config)\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
@@ -356,9 +385,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Once more, with a Batch AI cluster\n",
"## Once more, with an AmlCompute cluster\n",
"\n",
"Just to prove we can, let's create a Batch AI cluster using MLC, and run our demo there, as well."
"Just to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well."
]
},
{
@@ -367,25 +396,27 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"clust_name = \"cpucluster\"\n",
"# choose a name for your cluster\n",
"cluster_name = \"cpucluster\"\n",
"\n",
"try:\n",
" # If you already have a cluster named this, we don't need to make a new one.\n",
" cts = ws.compute_targets \n",
" compute_target = cts[clust_name]\n",
" assert compute_target.type == 'BatchAI'\n",
"except:\n",
" # Let's make a new one here.\n",
" provisioning_config = AmlCompute.provisioning_configuration(max_nodes=6, \n",
" vm_size='STANDARD_D2_V2')\n",
" \n",
" compute_target = AmlCompute.create(ws, clust_name, provisioning_config)\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" print('Found existing compute target.')\n",
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', \n",
" max_nodes=4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True, min_node_count=1, timeout_in_minutes=20)\n",
"print(compute_target.name)\n",
"# For a more detailed view of current BatchAI cluster status, use the 'status' property \n",
"# print(compute_target.status.serialize())"
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
]
},
{

View File

@@ -44,7 +44,7 @@ def load_data(data_dir):
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=0)
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
@@ -160,11 +160,30 @@ def fine_tune_model(num_epochs, data_dir, learning_rate, momentum):
return model
def download_data():
"""Download and extract the training data."""
import urllib
from zipfile import ZipFile
# download data
data_file = './hymenoptera_data.zip'
download_url = 'https://download.pytorch.org/tutorial/hymenoptera_data.zip'
urllib.request.urlretrieve(download_url, filename=data_file)
# extract files
with ZipFile(data_file, 'r') as zip:
print('extracting files...')
zip.extractall()
print('finished extracting')
data_dir = zip.namelist()[0]
# delete zip file
os.remove(data_file)
return data_dir
def main():
# get command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument('--data_dir', type=str,
help='directory of training data')
parser.add_argument('--num_epochs', type=int, default=25,
help='number of epochs to train')
parser.add_argument('--output_dir', type=str, help='output directory')
@@ -173,8 +192,9 @@ def main():
parser.add_argument('--momentum', type=float, default=0.9, help='momentum')
args = parser.parse_args()
print("data directory is: " + args.data_dir)
model = fine_tune_model(args.num_epochs, args.data_dir,
data_dir = download_data()
print("data directory is: " + data_dir)
model = fine_tune_model(args.num_epochs, data_dir,
args.learning_rate, args.momentum)
os.makedirs(args.output_dir, exist_ok=True)
torch.save(model, os.path.join(args.output_dir, 'model.pt'))

Binary file not shown.

After

Width:  |  Height:  |  Size: 123 KiB

View File

@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# 01. Train, hyperparameter tune, and deploy with PyTorch\n",
"# Train, hyperparameter tune, and deploy with PyTorch\n",
"\n",
"In this tutorial, you will train, hyperparameter tune, and deploy a PyTorch model using the Azure Machine Learning (AML) Python SDK.\n",
"\n",
@@ -62,6 +62,7 @@
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
@@ -93,7 +94,7 @@
"metadata": {},
"source": [
"## Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an [Azure Batch AI](https://docs.microsoft.com/azure/batch-ai/overview) cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an `AmlCompute` cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace.\n",
"\n",
"**Creation of the cluster takes approximately 5 minutes.** If the cluster is already in your workspace this code will skip the cluster creation process."
]
@@ -116,12 +117,12 @@
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
" max_nodes=6)\n",
" max_nodes=4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
@@ -134,96 +135,6 @@
"The above code creates a GPU cluster. If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upload training data\n",
"The dataset we will use consists of about 120 training images each for ants and bees, with 75 validation images for each class.\n",
"\n",
"First, download the dataset (located [here](https://download.pytorch.org/tutorial/hymenoptera_data.zip) as a zip file) locally to your current directory and extract the files. This will create a folder called `hymenoptera_data` with two subfolders `train` and `val` that contain the training and validation images, respectively. [Hymenoptera](https://en.wikipedia.org/wiki/Hymenoptera) is the order of insects that includes ants and bees."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import urllib\n",
"from zipfile import ZipFile\n",
"\n",
"# download data\n",
"download_url = 'https://download.pytorch.org/tutorial/hymenoptera_data.zip'\n",
"data_file = './hymenoptera_data.zip'\n",
"urllib.request.urlretrieve(download_url, filename=data_file)\n",
"\n",
"# extract files\n",
"with ZipFile(data_file, 'r') as zip:\n",
" print('extracting files...')\n",
" zip.extractall()\n",
" print('done')\n",
" \n",
"# delete zip file\n",
"os.remove(data_file)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the data accessible for remote training, you will need to upload the data from your local machine to the cloud. AML provides a convenient way to do so via a [Datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data). The datastore provides a mechanism for you to upload/download data, and interact with it from your remote compute targets. \n",
"\n",
"**Note: If your data is already stored in Azure, or you download the data as part of your training script, you will not need to do this step.**\n",
"\n",
"Each workspace is associated with a default datastore. In this tutorial, we will upload the training data to this default datastore."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = ws.get_default_datastore()\n",
"print(ds.datastore_type, ds.account_name, ds.container_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following code will upload the training data to the path `./hymenoptera_data` on the default datastore."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds.upload(src_dir='./hymenoptera_data', target_path='hymenoptera_data')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's get a reference to the path on the datastore with the training data. We can do so using the `path` method. In the next section, we can then pass this reference to our training script's `--data_dir` argument. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"path_on_datastore = 'hymenoptera_data'\n",
"ds_data = ds.path(path_on_datastore)\n",
"print(ds_data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -252,6 +163,14 @@
"os.makedirs(project_folder, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download training data\n",
"The dataset we will use (located [here](https://download.pytorch.org/tutorial/hymenoptera_data.zip) as a zip file) consists of about 120 training images each for ants and bees, with 75 validation images for each class. [Hymenoptera](https://en.wikipedia.org/wiki/Hymenoptera) is the order of insects that includes ants and bees. We will download and extract the dataset as part of our training script `pytorch_train.py`"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -290,6 +209,7 @@
"outputs": [],
"source": [
"import shutil\n",
"\n",
"shutil.copy('pytorch_train.py', project_folder)"
]
},
@@ -330,8 +250,7 @@
"from azureml.train.dnn import PyTorch\n",
"\n",
"script_params = {\n",
" '--data_dir': ds_data,\n",
" '--num_epochs': 10,\n",
" '--num_epochs': 30,\n",
" '--output_dir': './outputs'\n",
"}\n",
"\n",
@@ -368,6 +287,16 @@
"outputs": [],
"source": [
"run = experiment.submit(estimator)\n",
"print(run)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# to get more details of your run\n",
"print(run.get_details())"
]
},
@@ -386,6 +315,7 @@
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(run).show()"
]
},
@@ -587,7 +517,7 @@
"source": [
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies.create(pip_packages=['azureml-core', 'torch', 'torchvision'])\n",
"myenv = CondaDependencies.create(pip_packages=['azureml-defaults', 'torch', 'torchvision'])\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
" f.write(myenv.serialize_to_string())\n",
@@ -711,7 +641,7 @@
"metadata": {},
"source": [
"### Test the web service\n",
"Finally, let's test our deployed web service. We will send the data as a JSON string to the web service hosted in ACI and use the SDK's `run` API to invoke the service. Here we will take an arbitrary image from our validation data to predict on."
"Finally, let's test our deployed web service. We will send the data as a JSON string to the web service hosted in ACI and use the SDK's `run` API to invoke the service. Here we will take an image from our validation data to predict on."
]
},
{
@@ -724,8 +654,7 @@
"from PIL import Image\n",
"import matplotlib.pyplot as plt\n",
"\n",
"test_img = os.path.join('hymenoptera_data', 'val', 'bees', '10870992_eebeeb3a12.jpg') #arbitary image from val dataset\n",
"plt.imshow(Image.open(test_img))"
"plt.imshow(Image.open('test_img.jpg'))"
]
},
{
@@ -759,7 +688,7 @@
"metadata": {},
"outputs": [],
"source": [
"input_data = preprocess(test_img)\n",
"input_data = preprocess('test_img.jpg')\n",
"result = service.run(input_data=json.dumps({'data': input_data.tolist()}))\n",
"print(result)"
]
@@ -803,7 +732,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
"version": "3.6.6"
},
"msauthor": "minxia"
},

View File

@@ -17,7 +17,7 @@
}
},
"source": [
"# 03. Training, hyperparameter tune, and deploy with TensorFlow\n",
"# Training, hyperparameter tune, and deploy with TensorFlow\n",
"\n",
"## Introduction\n",
"This tutorial shows how to train a simple deep neural network using the MNIST dataset and TensorFlow on Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of `28x28` pixels, representing number from 0 to 9. The goal is to create a multi-class classifier to identify the digit each image represents, and deploy it as a web service in Azure.\n",
@@ -91,6 +91,7 @@
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
@@ -241,7 +242,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In this next step, we will upload the training and test set into the workspace's default datastore, which we will then later be mount on a Batch AI cluster for training."
"In this next step, we will upload the training and test set into the workspace's default datastore, which we will then later be mount on an `AmlCompute` cluster for training."
]
},
{
@@ -257,17 +258,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Batch AI cluster as compute target\n",
"[Batch AI](https://docs.microsoft.com/en-us/azure/batch-ai/overview) is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Batch AI cluster in the current workspace, if it doesn't already exist. We will then run the training script on this compute target."
"## Create a remote compute target\n",
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) to execute your training script on. In this tutorial, you create an `AmlCompute` cluster as your training compute resource. This code creates a cluster for you if it does not already exist in your workspace."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we could not find the cluster with the given name in the previous cell, then we will create a new cluster here. We will create a Batch AI Cluster of `STANDARD_NC6` GPU VMs. This process is broken down into 3 steps:\n",
"If we could not find the cluster with the given name in the previous cell, then we will create a new cluster here. We will create an `AmlCompute` cluster of `STANDARD_NC6` GPU VMs. This process is broken down into 3 steps:\n",
"1. create the configuration (this step is local and only takes a second)\n",
"2. create the Batch AI cluster (this step will take about **20 seconds**)\n",
"2. create the cluster (this step will take about **20 seconds**)\n",
"3. provision the VMs to bring the cluster to the initial size (of 1 in this case). This step will take about **3-5 minutes** and is providing only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell"
]
},
@@ -284,24 +285,19 @@
"cluster_name = \"gpucluster\"\n",
"\n",
"try:\n",
" # look for the existing cluster by name\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" if type(compute_target) is AmlCompute:\n",
" print('Found existing compute target {}.'.format(cluster_name))\n",
" else:\n",
" print('{} exists but it is not a Batch AI cluster. Please choose a different name.'.format(cluster_name))\n",
" print('Found existing compute target')\n",
"except ComputeTargetException:\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_NC6\", # GPU-based VM\n",
" #vm_priority='lowpriority', # optional\n",
" max_nodes=6)\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', \n",
" max_nodes=4)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
" \n",
"\n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it uses the scale settings for the cluster\n",
"compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"# Use the 'status' property to get a detailed status for the current cluster. \n",
"print(compute_target.status.serialize())"
@@ -311,7 +307,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that you have created the compute target, let's see what the workspace's `compute_targets` property returns. You should now see one entry named 'gpucluster' of type BatchAI."
"Now that you have created the compute target, let's see what the workspace's `compute_targets` property returns. You should now see one entry named 'gpucluster' of type `AmlCompute`."
]
},
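
A quick way to confirm this, assuming `ws` is the workspace object used throughout the notebook and that `compute_targets` behaves as the name-to-target mapping iterated elsewhere in this commit, is a sketch like:

```python
# Print the names of all compute targets registered in the workspace.
for name in ws.compute_targets:
    print(name)
```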
{
@@ -340,6 +336,7 @@
"outputs": [],
"source": [
"import shutil\n",
"\n",
"# the training logic is in the tf_mnist.py file.\n",
"shutil.copy('./tf_mnist.py', script_folder)\n",
"\n",
@@ -454,7 +451,7 @@
"As the Run is executed, it will go through the following stages:\n",
"1. Preparing: A docker image is created matching the Python environment specified by the TensorFlow estimator and it will be uploaded to the workspace's Azure Container Registry. This step will only happen once for each Python environment -- the container will then be cached for subsequent runs. Creating and uploading the image takes about **5 minutes**. While the job is preparing, logs are streamed to the run history and can be viewed to monitor the progress of the image creation.\n",
"\n",
"2. Scaling: If the compute needs to be scaled up (i.e. the Batch AI cluster requires more nodes to execute the run than currently available), the Batch AI cluster will attempt to scale up in order to make the required amount of nodes available. Scaling typically takes about **5 minutes**.\n",
"2. Scaling: If the compute needs to be scaled up (i.e. the Batch AI cluster requires more nodes to execute the run than currently available), the cluster will attempt to scale up in order to make the required amount of nodes available. Scaling typically takes about **5 minutes**.\n",
"\n",
"3. Running: All scripts in the script folder are uploaded to the compute target, data stores are mounted/copied and the `entry_script` is executed. While the job is running, stdout and the `./logs` folder are streamed to the run history and can be viewed to monitor the progress of the run.\n",
"\n",
@@ -472,6 +469,7 @@
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(run).show()"
]
},
@@ -555,6 +553,7 @@
"outputs": [],
"source": [
"import os\n",
"\n",
"os.makedirs('./imgs', exist_ok=True)\n",
"metrics = run.get_metrics()\n",
"\n",
@@ -590,7 +589,7 @@
"outputs": [],
"source": [
"# create a model folder in the current directory\n",
"os.makedirs('./model', exist_ok = True)\n",
"os.makedirs('./model', exist_ok=True)\n",
"\n",
"for f in run.get_file_names():\n",
" if f.startswith('outputs/model'):\n",
@@ -616,6 +615,7 @@
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"tf.reset_default_graph()\n",
"\n",
"saver = tf.train.import_meta_graph(\"./model/mnist-tf.model.meta\")\n",
@@ -907,6 +907,7 @@
"outputs": [],
"source": [
"from azureml.core.runconfig import CondaDependencies\n",
"\n",
"cd = CondaDependencies.create()\n",
"cd.add_conda_package('numpy')\n",
"cd.add_tensorflow_conda_package()\n",
@@ -960,6 +961,7 @@
"outputs": [],
"source": [
"from azureml.core.image import ContainerImage\n",
"\n",
"imgconfig = ContainerImage.image_configuration(execution_script=\"score.py\", \n",
" runtime=\"python\", \n",
" conda_file=\"myenv.yml\")"

View File

@@ -190,16 +190,6 @@
"source": [
"## Create Linux DSVM as a compute target\n",
"\n",
"**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node AmlCompute. The DSVMCompute class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs and the commands below will spin a Linux VM only.\n",
"\n",
"```shell\n",
"# create a DSVM in your resource group\n",
"# note you need to be at least a contributor to the resource group in order to execute this command successfully.\n",
"(myenv) $ az vm create --resource-group <resource_group_name> --name <some_vm_name> --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username <username> --admin-password <password> --generate-ssh-keys --authentication-type password\n",
"```\n",
"\n",
"**Note**: You can also use [this url](https://portal.azure.com/#create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal\n",
"\n",
"**Note**: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
" \n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you switch to a different port (such as 5022), you can specify the port number in the provisioning configuration object."
@@ -214,7 +204,7 @@
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"compute_target_name = 'mydsvm'\n",
"compute_target_name = 'cpudsvm'\n",
"\n",
"try:\n",
" dsvm_compute = DsvmCompute(workspace=ws, name=compute_target_name)\n",
@@ -231,7 +221,13 @@
"metadata": {},
"source": [
"## Attach an existing Linux DSVM\n",
"You can also attach an existing Linux VM as a compute target. The default port is 22."
"You can also attach an existing Linux VM as a compute target. To create one, you can use Azure CLI command:\n",
"\n",
"```\n",
"az vm create -n cpudsvm -l eastus2 -g <my-resource-group> --size Standard_D2_v2 --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --generate-ssh-keys\n",
"```\n",
"\n",
"The ```--generate-ssh-keys``` automatically places the ssh keys to standard location, typically to ~/.ssh folder. The default port is 22."
]
},
{
@@ -241,14 +237,14 @@
"outputs": [],
"source": [
"'''\n",
"from azureml.core.compute import RemoteCompute \n",
"# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase \n",
"attached_dsvm_compute = RemoteCompute.attach(workspace=ws,\n",
" name=\"attached_vm\",\n",
" username='<usename>',\n",
"from azureml.core.compute import ComputeTarget, RemoteCompute \n",
"attach_config = RemoteCompute.attach_configuration(username='<my_username>',\n",
" address='<ip_adress_or_fqdn>',\n",
" ssh_port=22,\n",
" password='<password>')\n",
" private_key_file='./.ssh/id_rsa')\n",
"attached_dsvm_compute = ComputeTarget.attach(workspace=ws,\n",
" name='attached_vm',\n",
" attach_configuration=attach_config)\n",
"attached_dsvm_compute.wait_for_completion(show_output=True)\n",
"'''\n"
]
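
A cleaned-up sketch of the configuration-based attach flow shown (commented out) above, with purely illustrative placeholder values for the username, address, and key path:

```python
from azureml.core.compute import ComputeTarget, RemoteCompute

# To authenticate with an SSH key instead of a password, provide private_key_file
# (and optionally private_key_passphrase). The values below are placeholders.
attach_config = RemoteCompute.attach_configuration(username='<my_username>',
                                                   address='<ip_address_or_fqdn>',
                                                   ssh_port=22,
                                                   private_key_file='./.ssh/id_rsa')

attached_dsvm_compute = ComputeTarget.attach(workspace=ws,
                                             name='attached_vm',
                                             attach_configuration=attach_config)
attached_dsvm_compute.wait_for_completion(show_output=True)
```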
@@ -589,7 +585,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up compute resource"
"## Clean up compute resource\n",
"\n",
"Use ```detach()``` to detach an existing DSVM from Workspace without deleting it. Use ```delete()``` if you created a new ```DsvmCompute``` and want to delete it."
]
},
{
@@ -598,7 +596,8 @@
"metadata": {},
"outputs": [],
"source": [
"dsvm_compute.delete()"
"# dsvm_compute.detach()\n",
"# dsvm_compute.delete()"
]
}
],
@@ -623,7 +622,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
"version": "3.6.6"
}
},
"nbformat": 4,

View File

@@ -1,124 +0,0 @@
# This is a modified version of https://github.com/pytorch/examples/blob/master/mnist/main.py which is
# licensed under BSD 3-Clause (https://github.com/pytorch/examples/blob/master/LICENSE)
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import os
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def train(args, model, device, train_loader, optimizer, epoch, output_dir):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(args, model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, size_average=False, reduce=True).item() # sum up batch loss
pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
def main():
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
parser.add_argument('--output-dir', type=str, default='outputs')
args = parser.parse_args()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
output_dir = args.output_dir
os.makedirs(output_dir, exist_ok=True)
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, download=True,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
),
batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=False,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
),
batch_size=args.test_batch_size, shuffle=True, **kwargs)
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
for epoch in range(1, args.epochs + 1):
train(args, model, device, train_loader, optimizer, epoch, output_dir)
test(args, model, device, test_loader)
# save model
dummy_input = torch.randn(1, 1, 28, 28, device=device)
model_path = os.path.join(output_dir, 'mnist.onnx')
torch.onnx.export(model, dummy_input, model_path)
if __name__ == '__main__':
main()

File diff suppressed because one or more lines are too long

Binary file not shown.

Binary file not shown.


View File

@@ -1,336 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure Machine Learning Pipeline with DataTranferStep\n",
"This notebook is used to demonstrate the use of DataTranferStep in Azure Machine Learning Pipeline.\n",
"\n",
"In certain cases, you will need to transfer data from one data location to another. For example, your data may be in Files storage and you may want to move it to Blob storage. Or, if your data is in an ADLS account and you want to make it available in the Blob storage. The built-in **DataTransferStep** class helps you transfer data in these situations.\n",
"\n",
"The below example shows how to move data in an ADLS account to Blob storage."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Azure Machine Learning and Pipeline SDK-specific imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import azureml.core\n",
"from azureml.core.compute import ComputeTarget, DatabricksCompute, DataFactoryCompute\n",
"from azureml.exceptions import ComputeTargetException\n",
"from azureml.core import Workspace, Run, Experiment\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import AdlaStep\n",
"from azureml.core.datastore import Datastore\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.data.sql_data_reference import SqlDataReference\n",
"from azureml.core import attach_legacy_compute_target\n",
"from azureml.data.stored_procedure_parameter import StoredProcedureParameter, StoredProcedureParameterType\n",
"from azureml.pipeline.steps import DataTransferStep\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json\n",
"\n",
"If you don't have a config.json file, please go through the configuration Notebook located here:\n",
"https://github.com/Azure/MachineLearningNotebooks. \n",
"\n",
"This sets you up with a working config file that has information on your workspace, subscription id, etc. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register Datastores\n",
"\n",
"In the code cell below, you will need to fill in the appropriate values for the workspace name, datastore name, subscription id, resource group, store name, tenant id, client id, and client secret that are associated with your ADLS datastore. \n",
"\n",
"For background on registering your data store, consult this article:\n",
"\n",
"https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# un-comment the following and replace the strings with the \n",
"# correct values for your ADLS datastore\n",
"\n",
"# workspace = \"<my-workspace-name>\"\n",
"# datastore_name = \"<my-datastore-name>\" # ADLS datastore name\n",
"# subscription_id = \"<my-subscription-id>\" # subscription id of ADLS account\n",
"# resource_group = \"<my-resource-group>\" # resource group of ADLS account\n",
"# store_name = \"<my-storename>\" # ADLS account name\n",
"# tenant_id = \"<my-tenant-id>\" # tenant id of service principal\n",
"# client_id = \"<my-client-id>\" # client id of service principal\n",
"# client_secret = \"<my-client-secret>\" # the secret of service principal\n",
"\n",
"\n",
"try:\n",
" adls_datastore = Datastore.get(ws, datastore_name)\n",
" print(\"found datastore with name: %s\" % datastore_name)\n",
"except:\n",
" adls_datastore = Datastore.register_azure_data_lake(\n",
" workspace=ws,\n",
" datastore_name=datastore_name,\n",
" subscription_id=subscription_id, # subscription id of ADLS account\n",
" resource_group=resource_group, # resource group of ADLS account\n",
" store_name=store_name, # ADLS account name\n",
" tenant_id=tenant_id, # tenant id of service principal\n",
" client_id=client_id, # client id of service principal\n",
" client_secret=client_secret) # the secret of service principal\n",
" print(\"registered datastore with name: %s\" % datastore_name)\n",
"\n",
"# un-comment the following and replace the strings with the\n",
"# correct values for your blob datastore\n",
"\n",
"# blob_datastore_name = \"<my-blob-datastore-name>\"\n",
"# account_name = \"<my-blob-account-name>\"\n",
"# container_name = \"<my-blob-container-name>\"\n",
"# account_key = \"<my-blob-account-key>\"\n",
"\n",
"try:\n",
" blob_datastore = Datastore.get(ws, blob_datastore_name)\n",
" print(\"found blob datastore with name: %s\" % blob_datastore_name)\n",
"except:\n",
" blob_datastore = Datastore.register_azure_blob_container(\n",
" workspace=ws,\n",
" datastore_name=blob_datastore_name,\n",
" account_name=account_name, # Storage account name\n",
" container_name=container_name, # Name of Azure blob container\n",
" account_key=account_key) # Storage account key\"\n",
" print(\"registered blob datastore with name: %s\" % blob_datastore_name)\n",
"\n",
"# CLI:\n",
"# az ml datastore register-blob -n <datastore-name> -a <account-name> -c <container-name> -k <account-key> [-t <sas-token>]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create DataReferences"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"adls_datastore = Datastore(workspace=ws, name=\"MyAdlsDatastore\")\n",
"\n",
"# adls\n",
"adls_data_ref = DataReference(\n",
" datastore=adls_datastore,\n",
" data_reference_name=\"adls_test_data\",\n",
" path_on_datastore=\"testdata\")\n",
"\n",
"blob_datastore = Datastore(workspace=ws, name=\"MyBlobDatastore\")\n",
"\n",
"# blob data\n",
"blob_data_ref = DataReference(\n",
" datastore=blob_datastore,\n",
" data_reference_name=\"blob_test_data\",\n",
" path_on_datastore=\"testdata\")\n",
"\n",
"print(\"obtained adls, blob data references\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Data Factory Account"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_factory_name = 'adftest'\n",
"\n",
"def get_or_create_data_factory(workspace, factory_name):\n",
" try:\n",
" return DataFactoryCompute(workspace, factory_name)\n",
" except ComputeTargetException as e:\n",
" if 'ComputeTargetNotFound' in e.message:\n",
" print('Data factory not found, creating...')\n",
" provisioning_config = DataFactoryCompute.provisioning_configuration()\n",
" data_factory = ComputeTarget.create(workspace, factory_name, provisioning_config)\n",
" data_factory.wait_for_provisioning()\n",
" return data_factory\n",
" else:\n",
" raise e\n",
" \n",
"data_factory_compute = get_or_create_data_factory(ws, data_factory_name)\n",
"\n",
"print(\"setup data factory account complete\")\n",
"\n",
"# CLI:\n",
"# Create: az ml computetarget setup datafactory -n <name>\n",
"# BYOC: az ml computetarget attach datafactory -n <name> -i <resource-id>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a DataTransferStep"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**DataTransferStep** is used to transfer data between Azure Blob, Azure Data Lake Store, and Azure SQL database.\n",
"\n",
"- **name:** Name of module\n",
"- **source_data_reference:** Input connection that serves as source of data transfer operation.\n",
"- **destination_data_reference:** Input connection that serves as destination of data transfer operation.\n",
"- **compute_target:** Azure Data Factory to use for transferring data.\n",
"- **allow_reuse:** Whether the step should reuse results of previous DataTransferStep when run with same inputs. Set as False to force data to be transferred again.\n",
"\n",
"Optional arguments to explicitly specify whether a path corresponds to a file or a directory. These are useful when storage contains both file and directory with the same name or when creating a new destination path.\n",
"\n",
"- **source_reference_type:** An optional string specifying the type of source_data_reference. Possible values include: 'file', 'directory'. When not specified, we use the type of existing path or directory if it's a new path.\n",
"- **destination_reference_type:** An optional string specifying the type of destination_data_reference. Possible values include: 'file', 'directory'. When not specified, we use the type of existing path or directory if it's a new path."
]
},
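
The next cell constructs the step without these optional arguments. As a purely illustrative sketch, reusing the `adls_data_ref`, `blob_data_ref`, and `data_factory_compute` objects defined earlier (the step name here is hypothetical), explicitly marking both ends of the transfer as directories would look like:

```python
# Same transfer as in the next cell, but with the optional reference types spelled out.
transfer_adls_to_blob_dirs = DataTransferStep(
    name="transfer_adls_to_blob_dirs",
    source_data_reference=adls_data_ref,
    source_reference_type="directory",
    destination_data_reference=blob_data_ref,
    destination_reference_type="directory",
    compute_target=data_factory_compute)
```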
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"transfer_adls_to_blob = DataTransferStep(\n",
" name=\"transfer_adls_to_blob\",\n",
" source_data_reference=adls_data_ref,\n",
" destination_data_reference=blob_data_ref,\n",
" compute_target=data_factory_compute)\n",
"\n",
"print(\"data transfer step created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build and Submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline = Pipeline(\n",
" description=\"data_transfer_101\",\n",
" workspace=ws,\n",
" steps=[transfer_adls_to_blob])\n",
"\n",
"pipeline_run = Experiment(ws, \"Data_Transfer_example\").submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Next: Databricks as a Compute Target\n",
"To use Databricks as a compute target from Azure Machine Learning Pipeline, a DatabricksStep is used. This [notebook](./aml-pipelines-use-databricks-as-compute-target.ipynb) demonstrates the use of a DatabricksStep in an Azure Machine Learning Pipeline."
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,631 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure Machine Learning Pipelines: Getting Started\n",
"\n",
"## Overview\n",
"\n",
"Read [Azure Machine Learning Pipelines](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines) overview, or the [readme article](./README.md) on Azure Machine Learning Pipelines to get more information.\n",
" \n",
"\n",
"This Notebook shows basic construction of a **pipeline** that runs jobs unattended in different compute clusters. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites and Azure Machine Learning Basics\n",
"Make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installing Packages\n",
"These packages are used at later stages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install pandas\n",
"!pip install requests"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enabling Widgets\n",
"\n",
"Install the following jupyter extensions to support Azure Machine Learning widgets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install azureml.widgets\n",
"!jupyter nbextension install --py --user azureml.widgets\n",
"!jupyter nbextension enable --py --user azureml.widgets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Azure Machine Learning Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace, Run, Experiment, Datastore\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.compute import DataFactoryCompute\n",
"from azureml.widgets import RunDetails\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pipeline SDK-specific imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.core import Pipeline, PipelineData, StepSequence\n",
"from azureml.pipeline.steps import PythonScriptStep\n",
"from azureml.pipeline.steps import DataTransferStep\n",
"from azureml.pipeline.core import PublishedPipeline\n",
"from azureml.pipeline.core.graph import PipelineParameter\n",
"\n",
"print(\"Pipeline SDK-specific imports completed\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize Workspace\n",
"\n",
"Initialize a [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class%29) object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')\n",
"\n",
"# Default datastore (Azure file storage)\n",
"def_file_store = ws.get_default_datastore() \n",
"# The above call is equivalent to Datastore(ws, \"workspacefilestore\") or simply Datastore(ws)\n",
"print(\"Default datastore's name: {}\".format(def_file_store.name))\n",
"\n",
"# Blob storage associated with the workspace\n",
"# The following call GETS the Azure Blob Store associated with your workspace.\n",
"# Note that workspaceblobstore is **the name of this store and CANNOT BE CHANGED and must be used as is** \n",
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
"print(\"Blobstore's name: {}\".format(def_blob_store.name))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# project folder\n",
"project_folder = '.'\n",
" \n",
"print('Sample projects will be created in {}.'.format(project_folder))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Required data and script files for the the tutorial\n",
"Sample files required to finish this tutorial are already copied to the project folder specified above. Even though the .py provided in the samples don't have much \"ML work,\" as a data scientist, you will work on this extensively as part of your work. To complete this tutorial, the contents of these files are not very important. The one-line files are for demostration purpose only."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Datastore concepts\n",
"A [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore(class) is a place where data can be stored that is then made accessible to a compute either by means of mounting or copying the data to the compute target. \n",
"\n",
"A Datastore can either be backed by an Azure File Storage (default) or by an Azure Blob Storage.\n",
"\n",
"In this next step, we will upload the training and test set into the workspace's default storage (File storage), and another piece of data to Azure Blob Storage. When to use [Azure Blobs](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction), [Azure Files](https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction), or [Azure Disks](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/managed-disks-overview) is [detailed here](https://docs.microsoft.com/en-us/azure/storage/common/storage-decide-blobs-files-disks).\n",
"\n",
"**Please take good note of the concept of the datastore.**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Upload data to default datastore\n",
"Default datastore on workspace is the Azure File storage. The workspace has a Blob storage associated with it as well. Let's upload a file to each of these storages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get_default_datastore() gets the default Azure File Store associated with your workspace.\n",
"# Here we are reusing the def_file_store object we obtained earlier\n",
"\n",
"# target_path is the directory at the destination\n",
"def_file_store.upload_files(['./20news.pkl'], \n",
" target_path = '20newsgroups', \n",
" overwrite = True, \n",
" show_progress = True)\n",
"\n",
"# Here we are reusing the def_blob_store we created earlier\n",
"def_blob_store.upload_files([\"./20news.pkl\"], target_path=\"20newsgroups\", overwrite=True)\n",
"\n",
"print(\"Upload calls completed\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### (Optional) See your files using Azure Portal\n",
"Once you successfully uploaded the files, you can browse to them (or upload more files) using [Azure Portal](https://portal.azure.com). At the portal, make sure you have selected **AzureML Nursery** as your subscription (click *Resource Groups* and then select the subscription). Then look for your **Machine Learning Workspace** (it has your *alias* as the name). It has a link to your storage. Click on the storage link. It will take you to a page where you can see [Blobs](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction), [Files](https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction), [Tables](https://docs.microsoft.com/en-us/azure/storage/tables/table-storage-overview), and [Queues](https://docs.microsoft.com/en-us/azure/storage/queues/storage-queues-introduction). We have just uploaded a file to the Blob storage and another one to the File storage. You should be able to see both of these files in their respective locations. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compute Targets\n",
"A compute target specifies where to execute your program such as a remote Docker on a VM, or a cluster. A compute target needs to be addressable and accessible by you.\n",
"\n",
"**You need at least one compute target to send your payload to. We are planning to use Azure Machine Learning Compute exclusively for this tutorial for all steps. However in some cases you may require multiple compute targets as some steps may run in one compute target like Azure Machine Learning Compute, and some other steps in the same pipeline could run in a different compute target.**\n",
"\n",
"*The example belows show creating/retrieving/attaching to an Azure Machine Learning Compute instance.*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### List of Compute Targets on the workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cts = ws.compute_targets\n",
"for ct in cts:\n",
" print(ct)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve or create a Azure Machine Learning compute\n",
"Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Azure Machine Learning Compute in the current workspace, if it doesn't already exist. We will then run the training script on this compute target.\n",
"\n",
"If we could not find the compute with the given name in the previous cell, then we will create a new compute here. We will create an Azure Machine Learning Compute containing **STANDARD_D2_V2 CPU VMs**. This process is broken down into the following steps:\n",
"\n",
"1. Create the configuration\n",
"2. Create the Azure Machine Learning compute\n",
"\n",
"**This process will take about 3 minutes and is providing only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"aml_compute_target = \"aml-compute\"\n",
"try:\n",
" aml_compute = AmlCompute(ws, aml_compute_target)\n",
" print(\"found existing compute target.\")\n",
"except:\n",
" print(\"creating new compute target\")\n",
" \n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\",\n",
" min_nodes = 1, \n",
" max_nodes = 4) \n",
" aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)\n",
" aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
"print(\"Azure Machine Learning Compute attached\")\n",
"# For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property \n",
"print(aml_compute.status.serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Wait for this call to finish before proceeding (you will see the asterisk turning to a number).**\n",
"\n",
"Now that you have created the compute target, let's see what the workspace's compute_targets() function returns. You should now see one entry named 'amlcompute' of type AmlCompute."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Now that we have completed learning the basics of Azure Machine Learning (AML), let's go ahead and start understanding the Pipeline concepts.**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a Step in a Pipeline\n",
"A Step is a unit of execution. Step typically needs a target of execution (compute target), a script to execute, and may require script arguments and inputs, and can produce outputs. The step also could take a number of other parameters. Azure Machine Learning Pipelines provides the following built-in Steps:\n",
"\n",
"- [**PythonScriptStep**](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep?view=azure-ml-py): Add a step to run a Python script in a Pipeline.\n",
"- [**AdlaStep**](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.adla_step.adlastep?view=azure-ml-py): Adds a step to run U-SQL script using Azure Data Lake Analytics.\n",
"- [**DataTransferStep**](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.data_transfer_step.datatransferstep?view=azure-ml-py): Transfers data between Azure Blob and Data Lake accounts.\n",
"- [**DatabricksStep**](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep?view=azure-ml-py): Adds a DataBricks notebook as a step in a Pipeline.\n",
"- [**HyperDriveStep**](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.hyper_drive_step.hyperdrivestep?view=azure-ml-py): Creates a Hyper Drive step for Hyper Parameter Tuning in a Pipeline.\n",
"\n",
"The following code will create a PythonScriptStep to be executed in the Azure Machine Learning Compute we created above using train.py, one of the files already made available in the project folder.\n",
"\n",
"A **PythonScriptStep** is a basic, built-in step to run a Python Script on a compute target. It takes a script name and optionally other parameters like arguments for the script, compute target, inputs and outputs. If no compute target is specified, default compute target for the workspace is used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Uses default values for PythonScriptStep construct.\n",
"\n",
"# Syntax\n",
"# PythonScriptStep(\n",
"# script_name, \n",
"# name=None, \n",
"# arguments=None, \n",
"# compute_target=None, \n",
"# runconfig=None, \n",
"# inputs=None, \n",
"# outputs=None, \n",
"# params=None, \n",
"# source_directory=None, \n",
"# allow_reuse=True, \n",
"# version=None, \n",
"# hash_paths=None)\n",
"# This returns a Step\n",
"step1 = PythonScriptStep(name=\"train_step\",\n",
" script_name=\"train.py\", \n",
" compute_target=aml_compute, \n",
" source_directory=project_folder,\n",
" allow_reuse=False)\n",
"print(\"Step1 created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note:** In the above call to PythonScriptStep(), the flag *allow_reuse* determines whether the step should reuse previous results when run with the same settings/inputs. This flag's default value is *True*; the default is set to *True* because, when inputs and parameters have not changed, we typically do not want to re-run a given pipeline step. \n",
"\n",
"If *allow_reuse* is set to *False*, a new run will always be generated for this step during pipeline execution. The *allow_reuse* flag can come in handy in situations where you do *not* want to re-run a pipeline step."
]
},
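
For contrast with `step1` above, which sets *allow_reuse* to *False*, a step that keeps the default reuse behavior simply omits the flag. A minimal sketch (the step name is illustrative and assumes the same `aml_compute` and `project_folder` as above):

```python
# allow_reuse defaults to True: if the script, inputs, and parameters are
# unchanged, a later pipeline run can reuse this step's previous results.
cached_train_step = PythonScriptStep(name="train_step_cached",
                                     script_name="train.py",
                                     compute_target=aml_compute,
                                     source_directory=project_folder)
```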
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Running a few steps in parallel\n",
"Here we are looking at a simple scenario where we are running a few steps (all involving PythonScriptStep) in parallel. Running nodes in **parallel** is the default behavior for steps in a pipeline.\n",
"\n",
"We already have one step defined earlier. Let's define few more steps."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# All steps use files already available in the project_folder\n",
"# All steps use the same Azure Machine Learning compute target as well\n",
"step2 = PythonScriptStep(name=\"compare_step\",\n",
" script_name=\"compare.py\", \n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
"\n",
"step3 = PythonScriptStep(name=\"extract_step\",\n",
" script_name=\"extract.py\", \n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
"\n",
"# list of steps to run\n",
"steps = [step1, step2, step3]\n",
"print(\"Step lists created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build the pipeline\n",
"Once we have the steps (or steps collection), we can build the [pipeline](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py). By deafult, all these steps will run in **parallel** once we submit the pipeline for run.\n",
"\n",
"A pipeline is created with a list of steps and a workspace. Submit a pipeline using [submit](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment%28class%29?view=azure-ml-py#submit). When submit is called, a [PipelineRun](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinerun?view=azure-ml-py) is created which in turn creates [StepRun](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.steprun?view=azure-ml-py) objects for each step in the workflow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Syntax\n",
"# Pipeline(workspace, \n",
"# steps, \n",
"# description=None, \n",
"# default_datastore_name=None, \n",
"# default_source_directory=None, \n",
"# resolve_closure=True, \n",
"# _workflow_provider=None, \n",
"# _service_endpoint=None)\n",
"\n",
"pipeline1 = Pipeline(workspace=ws, steps=steps)\n",
"print (\"Pipeline is built\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Validate the pipeline\n",
"You have the option to [validate](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py#validate) the pipeline prior to submitting for run. The platform runs validation steps such as checking for circular dependencies and parameter checks etc. even if you do not explicitly call validate method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline1.validate()\n",
"print(\"Pipeline validation complete\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the pipeline\n",
"[Submitting](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline.pipeline?view=azure-ml-py#submit) the pipeline involves creating an [Experiment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment?view=azure-ml-py) object and providing the built pipeline for submission. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Submit syntax\n",
"# submit(experiment_name, \n",
"# pipeline_parameters=None, \n",
"# continue_on_node_failure=False, \n",
"# regenerate_outputs=False)\n",
"\n",
"pipeline_run1 = Experiment(ws, 'Hello_World1').submit(pipeline1, regenerate_outputs=True)\n",
"print(\"Pipeline is submitted for execution\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note:** If regenerate_outputs is set to True, a new submit will always force generation of all step outputs, and disallow data reuse for any step of this run. Once this run is complete, however, subsequent runs may reuse the results of this run.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Examine the pipeline run\n",
"\n",
"#### Use RunDetails Widget\n",
"We are going to use the RunDetails widget to examine the run of the pipeline. You can click each row below to get more details on the step runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RunDetails(pipeline_run1).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Use Pipeline SDK objects\n",
"You can cycle through the node_run objects and examine job logs, stdout, and stderr of each of the steps."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"step_runs = pipeline_run1.get_children()\n",
"for step_run in step_runs:\n",
" status = step_run.get_status()\n",
" print('Script:', step_run.name, 'status:', status)\n",
" \n",
" # Change this if you want to see details even if the Step has succeeded.\n",
" if status == \"Failed\":\n",
" joblog = step_run.get_job_log()\n",
" print('job log:', joblog)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Get additonal run details\n",
"If you wait until the pipeline_run is finished, you may be able to get additional details on the run. **Since this is a blocking call, the following code is commented out.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#pipeline_run1.wait_for_completion()\n",
"#for step_run in pipeline_run1.get_children():\n",
"# print(\"{}: {}\".format(step_run.name, step_run.get_metrics()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Running a few steps in sequence\n",
"Now let's see how we run a few steps in sequence. We already have three steps defined earlier. Let's *reuse* those steps for this part.\n",
"\n",
"We will reuse step1, step2, step3, but build the pipeline in such a way that we chain step3 after step2 and step2 after step1. Note that there is no explicit data dependency between these steps, but still steps can be made dependent by using the [run_after](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.builder.pipelinestep?view=azure-ml-py#run-after) construct."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"step2.run_after(step1)\n",
"step3.run_after(step2)\n",
"\n",
"# Try a loop\n",
"#step2.run_after(step3)\n",
"\n",
"# Now, construct the pipeline using the steps.\n",
"\n",
"# We can specify the \"final step\" in the chain, \n",
"# Pipeline will take care of \"transitive closure\" and \n",
"# figure out the implicit or explicit dependencies\n",
"# https://www.geeksforgeeks.org/transitive-closure-of-a-graph/\n",
"pipeline2 = Pipeline(workspace=ws, steps=[step3])\n",
"print (\"Pipeline is built\")\n",
"\n",
"pipeline2.validate()\n",
"print(\"Simple validation complete\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_run2 = Experiment(ws, 'Hello_World2').submit(pipeline2)\n",
"print(\"Pipeline is submitted for execution\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RunDetails(pipeline_run2).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Next: Pipelines with data dependency\n",
"The next [notebook](./aml-pipelines-with-data-dependency-steps.ipynb) demostrates how to construct a pipeline with data dependency."
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,358 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to Publish a Pipeline and Invoke the REST endpoint\n",
"In this notebook, we will see how we can publish a pipeline and then invoke the REST endpoint."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites and Azure Machine Learning Basics\n",
"Make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. \n",
"\n",
"### Initialization Steps"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace, Run, Experiment, Datastore\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.compute import DataFactoryCompute\n",
"from azureml.widgets import RunDetails\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)\n",
"\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.core import Pipeline, PipelineData, StepSequence\n",
"from azureml.pipeline.steps import PythonScriptStep\n",
"from azureml.pipeline.steps import DataTransferStep\n",
"from azureml.pipeline.core import PublishedPipeline\n",
"from azureml.pipeline.core.graph import PipelineParameter\n",
"\n",
"print(\"Pipeline SDK-specific imports completed\")\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')\n",
"\n",
"# Default datastore (Azure file storage)\n",
"def_file_store = ws.get_default_datastore() \n",
"print(\"Default datastore's name: {}\".format(def_file_store.name))\n",
"\n",
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
"print(\"Blobstore's name: {}\".format(def_blob_store.name))\n",
"\n",
"# project folder\n",
"project_folder = '.'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compute Targets\n",
"#### Retrieve an already attached Azure Machine Learning Compute"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"aml_compute_target = \"aml-compute\"\n",
"try:\n",
" aml_compute = AmlCompute(ws, aml_compute_target)\n",
" print(\"found existing compute target.\")\n",
"except:\n",
" print(\"creating new compute target\")\n",
" \n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\",\n",
" min_nodes = 1, \n",
" max_nodes = 4) \n",
" aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)\n",
" aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
"print(aml_compute.status.serialize())\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building Pipeline Steps with Inputs and Outputs\n",
"As mentioned earlier, a step in the pipeline can take data as input. This data can be a data source that lives in one of the accessible data locations, or intermediate data produced by a previous step in the pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reference the data uploaded to blob storage using DataReference\n",
"# Assign the datasource to blob_input_data variable\n",
"blob_input_data = DataReference(\n",
" datastore=def_blob_store,\n",
" data_reference_name=\"test_data\",\n",
" path_on_datastore=\"20newsgroups/20news.pkl\")\n",
"print(\"DataReference object created\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define intermediate data using PipelineData\n",
"processed_data1 = PipelineData(\"processed_data1\",datastore=def_blob_store)\n",
"print(\"PipelineData object created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes a datasource and produces intermediate data.\n",
"In this step, we define a step that consumes a datasource and produces intermediate data.\n",
"\n",
"**Open `train.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# trainStep consumes the datasource (Datareference) in the previous step\n",
"# and produces processed_data1\n",
"trainStep = PythonScriptStep(\n",
" script_name=\"train.py\", \n",
" arguments=[\"--input_data\", blob_input_data, \"--output_train\", processed_data1],\n",
" inputs=[blob_input_data],\n",
" outputs=[processed_data1],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder\n",
")\n",
"print(\"trainStep created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes intermediate data and produces intermediate data\n",
"In this step, we define a step that consumes an intermediate data and produces intermediate data.\n",
"\n",
"**Open `extract.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extractStep to use the intermediate data produced by step4\n",
"# This step also produces an output processed_data2\n",
"processed_data2 = PipelineData(\"processed_data2\", datastore=def_blob_store)\n",
"\n",
"extractStep = PythonScriptStep(\n",
" script_name=\"extract.py\",\n",
" arguments=[\"--input_extract\", processed_data1, \"--output_extract\", processed_data2],\n",
" inputs=[processed_data1],\n",
" outputs=[processed_data2],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
"print(\"extractStep created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes multiple intermediate data and produces intermediate data\n",
"In this step, we define a step that consumes multiple intermediate data and produces intermediate data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### PipelineParameter"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This step also has a [PipelineParameter](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.pipelineparameter?view=azure-ml-py) argument that help with calling the REST endpoint of the published pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We will use this later in publishing pipeline\n",
"pipeline_param = PipelineParameter(name=\"pipeline_arg\", default_value=10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Now define step6 that takes two inputs (both intermediate data), and produce an output\n",
"processed_data3 = PipelineData(\"processed_data3\", datastore=def_blob_store)\n",
"\n",
"\n",
"\n",
"compareStep = PythonScriptStep(\n",
" script_name=\"compare.py\",\n",
" arguments=[\"--compare_data1\", processed_data1, \"--compare_data2\", processed_data2, \"--output_compare\", processed_data3, \"--pipeline_param\", pipeline_param],\n",
" inputs=[processed_data1, processed_data2],\n",
" outputs=[processed_data3], \n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
"print(\"compareStep created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build the pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline1 = Pipeline(workspace=ws, steps=[compareStep])\n",
"print (\"Pipeline is built\")\n",
"\n",
"pipeline1.validate()\n",
"print(\"Simple validation complete\") "
]
},
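
Before publishing, note that the `pipeline_arg` parameter defined earlier can presumably also be overridden when submitting directly through the SDK, via the `pipeline_parameters` argument shown in the submit syntax of the getting-started notebook. A sketch under that assumption:

```python
# Override the parameter's default value of 10 for this particular run.
run_with_param = Experiment(ws, 'My_Pipeline1').submit(
    pipeline1,
    pipeline_parameters={"pipeline_arg": 20})
```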
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Publish the pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"published_pipeline1 = pipeline1.publish(name=\"My_New_Pipeline\", description=\"My Published Pipeline Description\")\n",
"print(published_pipeline1.id)"
]
},
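
`PublishedPipeline` is already imported at the top of this notebook, so the published pipeline can later be looked up again by its id. A sketch, assuming a `PublishedPipeline.get(workspace, id)` signature:

```python
# Retrieve the published pipeline by id, e.g. from a different session.
fetched_pipeline = PublishedPipeline.get(ws, published_pipeline1.id)
print(fetched_pipeline.name, fetched_pipeline.endpoint)
```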
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run published pipeline using its REST endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.authentication import AzureCliAuthentication\n",
"import requests\n",
"\n",
"cli_auth = AzureCliAuthentication()\n",
"aad_token = cli_auth.get_authentication_header()\n",
"\n",
"rest_endpoint1 = published_pipeline1.endpoint\n",
"\n",
"print(rest_endpoint1)\n",
"\n",
"# specify the param when running the pipeline\n",
"response = requests.post(rest_endpoint1, \n",
" headers=aad_token, \n",
" json={\"ExperimentName\": \"My_Pipeline1\",\n",
" \"RunSource\": \"SDK\",\n",
" \"ParameterAssignments\": {\"pipeline_arg\": 45}})\n",
"run_id = response.json()[\"Id\"]\n",
"\n",
"print(run_id)"
]
},
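
To monitor the run started through the REST endpoint from the SDK, one option is to rehydrate it as a pipeline run object. A sketch, assuming the `PipelineRun` class linked from the getting-started notebook accepts an experiment and a run id (this import is not part of the original notebook):

```python
from azureml.core import Experiment
from azureml.pipeline.core import PipelineRun

# Rehydrate the run submitted via REST and block until it finishes.
published_pipeline_run = PipelineRun(Experiment(ws, "My_Pipeline1"), run_id)
published_pipeline_run.wait_for_completion(show_output=True)
```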
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Next: Data Transfer\n",
"The next [notebook](./aml-pipelines-data-transfer.ipynb) will showcase data transfer steps between different types of data stores."
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,348 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AML Pipeline with AdlaStep\n",
"This notebook is used to demonstrate the use of AdlaStep in AML Pipeline."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AML and Pipeline SDK-specific imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import azureml.core\n",
"from azureml.core.compute import ComputeTarget, DatabricksCompute\n",
"from azureml.exceptions import ComputeTargetException\n",
"from azureml.core import Workspace, Run, Experiment\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import AdlaStep\n",
"from azureml.core.datastore import Datastore\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.core import attach_legacy_compute_target\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"script_folder = '.'\n",
"experiment_name = \"adla_101_experiment\"\n",
"ws._initialize_folder(experiment_name=experiment_name, directory=script_folder)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register Datastore"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# un-comment the following and replace the strings with the \n",
"# correct values for your ADLS datastore\n",
"\n",
"# workspace=\"<my-workspace-name\"\n",
"# datastore_name= \"<my-adls-datastore-name>\"\n",
"# subscription_id = \"<my-subscription-id>\"\n",
"# resource_group = \"<my-rg>\"\n",
"# store_name = \"<my-sotrename>\"\n",
"# tenant_id = \"<my-tenant>\"\n",
"# client_id = \"<my-client-id>\"\n",
"# client_secret = \"<my-client-secret>\"\n",
"\n",
"\n",
"try:\n",
" adls_datastore = Datastore.get(ws, datastore_name)\n",
" print(\"found datastore with name: %s\" % datastore_name)\n",
"except:\n",
" adls_datastore = Datastore.register_azure_data_lake(\n",
" workspace=ws,\n",
" datastore_name=datastore_name,\n",
" subscription_id=subscription_id, # subscription id of ADLS account\n",
" resource_group=resource_group, # resource group of ADLS account\n",
" store_name=store_name, # ADLS account name\n",
" tenant_id=tenant_id, # tenant id of service principal\n",
" client_id=client_id, # client id of service principal\n",
" client_secret=client_secret) # the secret of service principal\n",
" print(\"registered datastore with name: %s\" % datastore_name)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create DataReferences and PipelineData\n",
"\n",
"In the code cell below, replace datastorename with your default datastore name. Copy the file `testdata.txt` (located in the pipeline folder that this notebook is in) to the path on the datastore."
]
},
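{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can copy `testdata.txt` to the store through the Azure portal or the Azure CLI. Purely as an illustration, and assuming the optional `azure-datalake-store` package plus the same service principal values used when registering the datastore, an upload could also be sketched like this:\n",
"\n",
"```python\n",
"from azure.datalake.store import core, lib, multithread\n",
"\n",
"# Authenticate with the service principal and upload the sample file\n",
"token = lib.auth(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)\n",
"adls_client = core.AzureDLFileSystem(token, store_name=store_name)\n",
"multithread.ADLUploader(adls_client, lpath=\"./testdata.txt\", rpath=\"/testdata/testdata.txt\", overwrite=True)\n",
"```"
]
},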
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"datastorename = \"TestAdlsDatastore\"\n",
"\n",
"adls_datastore = Datastore(workspace=ws, name=datastorename)\n",
"script_input = DataReference(\n",
" datastore=adls_datastore,\n",
" data_reference_name=\"script_input\",\n",
" path_on_datastore=\"testdata/testdata.txt\")\n",
"\n",
"script_output = PipelineData(\"script_output\", datastore=adls_datastore)\n",
"\n",
"print(\"Created Pipeline Data\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Data Lake Account\n",
"\n",
"ADLA can only use data that is located in the default data store associated with that ADLA account. Through Azure portal, check the name of the default data store corresponding to the ADLA account you are using below. Replace the value associated with `adla_compute_name` in the code cell below accordingly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"adla_compute_name = 'testadl' # Replace this with your default compute\n",
"\n",
"from azureml.core.compute import ComputeTarget, AdlaCompute\n",
"\n",
"def get_or_create_adla_compute(workspace, compute_name):\n",
" try:\n",
" return AdlaCompute(workspace, compute_name)\n",
" except ComputeTargetException as e:\n",
" if 'ComputeTargetNotFound' in e.message:\n",
" print('adla compute not found, creating...')\n",
" provisioning_config = AdlaCompute.provisioning_configuration()\n",
" adla_compute = ComputeTarget.create(workspace, compute_name, provisioning_config)\n",
" adla_compute.wait_for_completion()\n",
" return adla_compute\n",
" else:\n",
" raise e\n",
" \n",
"adla_compute = get_or_create_adla_compute(ws, adla_compute_name)\n",
"\n",
"# CLI:\n",
"# Create: az ml computetarget setup adla -n <name>\n",
"# BYOC: az ml computetarget attach adla -n <name> -i <resource-id>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the above code cell completes, run the below to check your ADLA compute status:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"ADLA compute state:{}\".format(adla_compute.provisioning_state))\n",
"print(\"ADLA compute state:{}\".format(adla_compute.provisioning_errors))\n",
"print(\"Using ADLA compute:{}\".format(adla_compute.cluster_resource_id))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an AdlaStep"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**AdlaStep** is used to run U-SQL script using Azure Data Lake Analytics.\n",
"\n",
"- **name:** Name of module\n",
"- **script_name:** name of U-SQL script\n",
"- **inputs:** List of input port bindings\n",
"- **outputs:** List of output port bindings\n",
"- **adla_compute:** the ADLA compute to use for this job\n",
"- **params:** Dictionary of name-value pairs to pass to U-SQL job *(optional)*\n",
"- **degree_of_parallelism:** the degree of parallelism to use for this job *(optional)*\n",
"- **priority:** the priority value to use for the current job *(optional)*\n",
"- **runtime_version:** the runtime version of the Data Lake Analytics engine *(optional)*\n",
"- **root_folder:** folder that contains the script, assemblies etc. *(optional)*\n",
"- **hash_paths:** list of paths to hash to detect a change (script file is always hashed) *(optional)*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"adla_step = AdlaStep(\n",
" name='adla_script_step',\n",
" script_name='test_adla_script.usql',\n",
" inputs=[script_input],\n",
" outputs=[script_output],\n",
" compute_target=adla_compute)"
]
},
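{
"cell_type": "markdown",
"metadata": {},
"source": [
"The optional arguments from the list above can be added to the same call. A sketch (the `@@myparam@@` placeholder is hypothetical and would need to exist in your U-SQL script for the `params` substitution to take effect):\n",
"\n",
"```python\n",
"adla_step_with_options = AdlaStep(\n",
"    name='adla_script_step_with_options',\n",
"    script_name='test_adla_script.usql',\n",
"    inputs=[script_input],\n",
"    outputs=[script_output],\n",
"    params={'myparam': 'myvalue'},  # referenced as @@myparam@@ in the U-SQL script\n",
"    degree_of_parallelism=2,        # number of analytics units (AUs) for the job\n",
"    priority=100,                   # lower values run with higher priority\n",
"    compute_target=adla_compute)\n",
"```"
]
},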
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build and Submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline = Pipeline(\n",
" description=\"adla_102\",\n",
" workspace=ws, \n",
" steps=[adla_step],\n",
" default_source_directory=script_folder)\n",
"\n",
"pipeline_run = Experiment(workspace, experiment_name).submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Examine the run\n",
"You can cycle through the node_run objects and examine job logs, stdout, and stderr of each of the steps."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"step_runs = pipeline_run.get_children()\n",
"for step_run in step_runs:\n",
" status = step_run.get_status()\n",
" print('node', step_run.name, 'status:', status)\n",
" if status == \"Failed\":\n",
" joblog = step_run.get_job_log()\n",
" print('job log:', joblog)\n",
" stdout_log = step_run.get_stdout_log()\n",
" print('stdout log:', stdout_log)\n",
" stderr_log = step_run.get_stderr_log()\n",
" print('stderr log:', stderr_log)\n",
" with open(\"logs-\" + step_run.name + \".txt\", \"w\") as f:\n",
" f.write(joblog)\n",
" print(\"Job log written to logs-\"+ step_run.name + \".txt\")\n",
" if status == \"Finished\":\n",
" stdout_log = step_run.get_stdout_log()\n",
" print('stdout log:', stdout_log)"
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,651 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Using Databricks as a Compute Target from Azure Machine Learning Pipeline\n",
"To use Databricks as a compute target from [Azure Machine Learning Pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines), a [DatabricksStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep?view=azure-ml-py) is used. This notebook demonstrates the use of DatabricksStep in Azure Machine Learning Pipeline.\n",
"\n",
"The notebook will show:\n",
"1. Running an arbitrary Databricks notebook that the customer has in Databricks workspace\n",
"2. Running an arbitrary Python script that the customer has in DBFS\n",
"3. Running an arbitrary Python script that is available on local computer (will upload to DBFS, and then run in Databricks) \n",
"4. Running a JAR job that the customer has in DBFS.\n",
"\n",
"## Before you begin:\n",
"\n",
"1. **Create an Azure Databricks workspace** in the same subscription where you have your Azure Machine Learning workspace. You will need details of this workspace later on to define DatabricksStep. [Click here](https://ms.portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.Databricks%2Fworkspaces) for more information.\n",
"2. **Create PAT (access token)**: Manually create a Databricks access token at the Azure Databricks portal. See [this](https://docs.databricks.com/api/latest/authentication.html#generate-a-token) for more information.\n",
"3. **Add demo notebook to ADB**: This notebook has a sample you can use as is. Launch Azure Databricks attached to your Azure Machine Learning workspace and add a new notebook. \n",
"4. **Create/attach a Blob storage** for use from ADB"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add demo notebook to ADB Workspace\n",
"Copy and paste the below code to create a new notebook in your ADB workspace."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# direct access\n",
"dbutils.widgets.get(\"myparam\")\n",
"p = getArgument(\"myparam\")\n",
"print (\"Param -\\'myparam':\")\n",
"print (p)\n",
"\n",
"dbutils.widgets.get(\"input\")\n",
"i = getArgument(\"input\")\n",
"print (\"Param -\\'input':\")\n",
"print (i)\n",
"\n",
"dbutils.widgets.get(\"output\")\n",
"o = getArgument(\"output\")\n",
"print (\"Param -\\'output':\")\n",
"print (o)\n",
"\n",
"n = i + \"/testdata.txt\"\n",
"df = spark.read.csv(n)\n",
"\n",
"display (df)\n",
"\n",
"data = [('value1', 'value2')]\n",
"df2 = spark.createDataFrame(data)\n",
"\n",
"z = o + \"/output.txt\"\n",
"df2.write.csv(z)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Azure Machine Learning and Pipeline SDK-specific imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import azureml.core\n",
"from azureml.core.runconfig import JarLibrary\n",
"from azureml.core.compute import ComputeTarget, DatabricksCompute\n",
"from azureml.exceptions import ComputeTargetException\n",
"from azureml.core import Workspace, Run, Experiment\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import DatabricksStep\n",
"from azureml.core.datastore import Datastore\n",
"from azureml.data.data_reference import DataReference\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration. Make sure the config file is present at .\\config.json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Attach Databricks compute target\n",
"Next, you need to add your Databricks workspace to Azure Machine Learning as a compute target and give it a name. You will use this name to refer to your Databricks workspace compute target inside Azure Machine Learning.\n",
"\n",
"- **Resource Group** - The resource group name of your Azure Machine Learning workspace\n",
"- **Databricks Workspace Name** - The workspace name of your Azure Databricks workspace\n",
"- **Databricks Access Token** - The access token you created in ADB"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Replace with your account info before running.\n",
"\n",
"# db_compute_name = \"<my-databricks_compute_name>\"\n",
"# aml_resource_group = \"<my-aml-resource-group>\"\n",
"# db_workspace_name = \"<my-databricks_workspace_name>\"\n",
"# access_token = \"<my-databricks_access_token>\"\n",
"\n",
"try:\n",
" databricks_compute = ComputeTarget(workspace=ws, name=db_compute_name)\n",
" print('Compute target {} already exists'.format(db_compute_name))\n",
"except ComputeTargetException:\n",
" print('compute not found')\n",
" print('databricks_compute_name {}'.format(db_compute_name))\n",
" print('databricks_resource_id {}'.format(db_workspace_name))\n",
" print('databricks_access_token {}'.format(access_token))\n",
"\n",
" config = DatabricksCompute.attach_configuration(aml_resource_group, db_workspace_name, access_token)\n",
" ComputeTarget.attach(ws, db_compute_name, config)\n",
" databricks_compute.wait_for_completion(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Connections with Inputs and Outputs\n",
"The DatabricksStep supports Azure Bloband ADLS for inputs and outputs. You also will need to define a [Secrets](https://docs.azuredatabricks.net/user-guide/secrets/index.html) scope to enable authentication to external data sources such as Blob and ADLS from Databricks.\n",
"\n",
"- Databricks documentation on [Azure Blob](https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html)\n",
"- Databricks documentation on [ADLS](https://docs.databricks.com/spark/latest/data-sources/azure/azure-datalake.html)\n",
"\n",
"### Type of Data Access\n",
"Databricks allows to interact with Azure Blob and ADLS in two ways.\n",
"- **Direct Access**: Databricks allows you to interact with Azure Blob or ADLS URIs directly. The input or output URIs will be mapped to a Databricks widget param in the Databricks notebook.\n",
"- **Mouting**: You will be supplied with additional parameters and secrets that will enable you to mount your ADLS or Azure Blob input or output location in your Databricks notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Direct Access: Python sample code\n",
"If you have a data reference named \"input\" it will represent the URI of the input and you can access it directly in the Databricks python notebook like so:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"dbutils.widgets.get(\"input\")\n",
"y = getArgument(\"input\")\n",
"df = spark.read.csv(y)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Mounting: Python sample code for Azure Blob\n",
"Given an Azure Blob data reference named \"input\" the following widget params will be made available in the Databricks notebook:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# This contains the input URI\n",
"dbutils.widgets.get(\"input\")\n",
"myinput_uri = getArgument(\"input\")\n",
"\n",
"# How to get the input datastore name inside ADB notebook\n",
"# This contains the name of a Databricks secret (in the predefined \"amlscope\" secret scope) \n",
"# that contians an access key or sas for the Azure Blob input (this name is obtained by appending \n",
"# the name of the input with \"_blob_secretname\". \n",
"dbutils.widgets.get(\"input_blob_secretname\") \n",
"myinput_blob_secretname = getArgument(\"input_blob_secretname\")\n",
"\n",
"# This contains the required configuration for mounting\n",
"dbutils.widgets.get(\"input_blob_config\")\n",
"myinput_blob_config = getArgument(\"input_blob_config\")\n",
"\n",
"# Usage\n",
"dbutils.fs.mount(\n",
" source = myinput_uri,\n",
" mount_point = \"/mnt/input\",\n",
" extra_configs = {myinput_blob_config:dbutils.secrets.get(scope = \"amlscope\", key = myinput_blob_secretname)})\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Mounting: Python sample code for ADLS\n",
"Given an ADLS data reference named \"input\" the following widget params will be made available in the Databricks notebook:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# This contains the input URI\n",
"dbutils.widgets.get(\"input\") \n",
"myinput_uri = getArgument(\"input\")\n",
"\n",
"# This contains the client id for the service principal \n",
"# that has access to the adls input\n",
"dbutils.widgets.get(\"input_adls_clientid\") \n",
"myinput_adls_clientid = getArgument(\"input_adls_clientid\")\n",
"\n",
"# This contains the name of a Databricks secret (in the predefined \"amlscope\" secret scope) \n",
"# that contains the secret for the above mentioned service principal\n",
"dbutils.widgets.get(\"input_adls_secretname\") \n",
"myinput_adls_secretname = getArgument(\"input_adls_secretname\")\n",
"\n",
"# This contains the refresh url for the mounting configs\n",
"dbutils.widgets.get(\"input_adls_refresh_url\") \n",
"myinput_adls_refresh_url = getArgument(\"input_adls_refresh_url\")\n",
"\n",
"# Usage \n",
"configs = {\"dfs.adls.oauth2.access.token.provider.type\": \"ClientCredential\",\n",
" \"dfs.adls.oauth2.client.id\": myinput_adls_clientid,\n",
" \"dfs.adls.oauth2.credential\": dbutils.secrets.get(scope = \"amlscope\", key =myinput_adls_secretname),\n",
" \"dfs.adls.oauth2.refresh.url\": myinput_adls_refresh_url}\n",
"\n",
"dbutils.fs.mount(\n",
" source = myinput_uri,\n",
" mount_point = \"/mnt/output\",\n",
" extra_configs = configs)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use Databricks from Azure Machine Learning Pipeline\n",
"To use Databricks as a compute target from Azure Machine Learning Pipeline, a DatabricksStep is used. Let's define a datasource (via DataReference) and intermediate data (via PipelineData) to be used in DatabricksStep."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use the default blob storage\n",
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
"print('Datastore {} will be used'.format(def_blob_store.name))\n",
"\n",
"# We are uploading a sample file in the local directory to be used as a datasource\n",
"def_blob_store.upload_files([\"./testdata.txt\"], target_path=\"dbtest\", overwrite=False)\n",
"\n",
"step_1_input = DataReference(datastore=def_blob_store, path_on_datastore=\"dbtest\",\n",
" data_reference_name=\"input\")\n",
"\n",
"step_1_output = PipelineData(\"output\", datastore=def_blob_store)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add a DatabricksStep\n",
"Adds a Databricks notebook as a step in a Pipeline.\n",
"- ***name:** Name of the Module\n",
"- **inputs:** List of input connections for data consumed by this step. Fetch this inside the notebook using dbutils.widgets.get(\"input\")\n",
"- **outputs:** List of output port definitions for outputs produced by this step. Fetch this inside the notebook using dbutils.widgets.get(\"output\")\n",
"- **spark_version:** Version of spark for the databricks run cluster. default value: 4.0.x-scala2.11\n",
"- **node_type:** Azure vm node types for the databricks run cluster. default value: Standard_D3_v2\n",
"- **num_workers:** Number of workers for the databricks run cluster\n",
"- **autoscale:** The autoscale configuration for the databricks run cluster\n",
"- **spark_env_variables:** Spark environment variables for the databricks run cluster (dictionary of {str:str}). default value: {'PYSPARK_PYTHON': '/databricks/python3/bin/python3'}\n",
"- ***notebook_path:** Path to the notebook in the databricks instance.\n",
"- **notebook_params:** Parameters for the databricks notebook (dictionary of {str:str}). Fetch this inside the notebook using dbutils.widgets.get(\"myparam\")\n",
"- **run_name:** Name in databricks for this run\n",
"- **timeout_seconds:** Timeout for the databricks run\n",
"- **maven_libraries:** maven libraries for the databricks run\n",
"- **pypi_libraries:** pypi libraries for the databricks run\n",
"- **egg_libraries:** egg libraries for the databricks run\n",
"- **jar_libraries:** jar libraries for the databricks run\n",
"- **rcran_libraries:** rcran libraries for the databricks run\n",
"- **databricks_compute:** Azure Databricks compute\n",
"- **databricks_compute_name:** Name of Azure Databricks compute\n",
"\n",
"\\* *denotes required fields* \n",
"*You must provide exactly one of num_workers or autoscale paramaters* \n",
"*You must provide exactly one of databricks_compute or databricks_compute_name parameters*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id='notebook_howto'></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Running the demo notebook already added to the Databricks workspace\n",
"The commented out code in the below cell assumes that you have created a notebook called `demo_notebook` in Azure Databricks under your user folder so you can use `notebook_path = \"/Users/you@company.com/demo_notebook\"`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# notebook_path = \"/Users/you@company.com/demo_notebook\"\n",
"\n",
"dbNbStep = DatabricksStep(\n",
" name=\"DBNotebookInWS\",\n",
" inputs=[step_1_input],\n",
" outputs=[step_1_output],\n",
" num_workers=1,\n",
" notebook_path=notebook_path,\n",
" notebook_params={'myparam': 'testparam'},\n",
" run_name='DB_Notebook_demo',\n",
" compute_target=databricks_compute,\n",
" allow_reuse=False\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build and submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"steps = [dbNbStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_Notebook_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Running a Python script that is already added in DBFS\n",
"To run a Python script that is already uploaded to DBFS, follow the instructions below. You will first upload the Python script to DBFS using the [CLI](https://docs.azuredatabricks.net/user-guide/dbfs-databricks-file-system.html).\n",
"\n",
"The commented out code in the below cell assumes that you have uploaded `train-db-dbfs.py` to the root folder in DBFS. You can upload `train-db-dbfs.py` to the root folder in DBFS using this commandline so you can use `python_script_path = \"dbfs:/train-db-dbfs.py\"`:\n",
"\n",
"```\n",
"dbfs cp ./train-db-dbfs.py dbfs:/train-db-dbfs.py\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"python_script_path = \"dbfs:/train-db-dbfs.py\"\n",
"\n",
"dbPythonInDbfsStep = DatabricksStep(\n",
" name=\"DBPythonInDBFS\",\n",
" inputs=[step_1_input],\n",
" num_workers=1,\n",
" python_script_path=python_script_path,\n",
" python_script_params={'--input_data'},\n",
" run_name='DB_Python_demo',\n",
" compute_target=databricks_compute,\n",
" allow_reuse=False\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build and submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"steps = [dbPythonInDbfsStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_Python_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Running a Python script in Databricks that currenlty is in local computer\n",
"To run a Python script that is currently in your local computer, follow the instructions below. \n",
"\n",
"The commented out code below code assumes that you have `train-db-local.py` in the `scripts` subdirectory under the current working directory.\n",
"\n",
"In this case, the Python script will be uploaded first to DBFS, and then the script will be run in Databricks."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"python_script_name = \"train-db-local.py\"\n",
"source_directory = \".\"\n",
"\n",
"dbPythonInLocalMachineStep = DatabricksStep(\n",
" name=\"DBPythonInLocalMachine\",\n",
" inputs=[step_1_input],\n",
" num_workers=1,\n",
" python_script_name=python_script_name,\n",
" source_directory=source_directory,\n",
" run_name='DB_Python_Local_demo',\n",
" compute_target=databricks_compute,\n",
" allow_reuse=False\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build and submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"steps = [dbPythonInLocalMachineStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_Python_Local_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4. Running a JAR job that is alreay added in DBFS\n",
"To run a JAR job that is already uploaded to DBFS, follow the instructions below. You will first upload the JAR file to DBFS using the [CLI](https://docs.azuredatabricks.net/user-guide/dbfs-databricks-file-system.html).\n",
"\n",
"The commented out code in the below cell assumes that you have uploaded `train-db-dbfs.jar` to the root folder in DBFS. You can upload `train-db-dbfs.jar` to the root folder in DBFS using this commandline so you can use `jar_library_dbfs_path = \"dbfs:/train-db-dbfs.jar\"`:\n",
"\n",
"```\n",
"dbfs cp ./train-db-dbfs.jar dbfs:/train-db-dbfs.jar\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"main_jar_class_name = \"com.microsoft.aeva.Main\"\n",
"jar_library_dbfs_path = \"dbfs:/train-db-dbfs.jar\"\n",
"\n",
"dbJarInDbfsStep = DatabricksStep(\n",
" name=\"DBJarInDBFS\",\n",
" inputs=[step_1_input],\n",
" num_workers=1,\n",
" main_class_name=main_jar_class_name,\n",
" jar_params={'arg1', 'arg2'},\n",
" run_name='DB_JAR_demo',\n",
" jar_libraries=[JarLibrary(jar_library_dbfs_path)],\n",
" compute_target=databricks_compute,\n",
" allow_reuse=False\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build and submit the Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"steps = [dbJarInDbfsStep]\n",
"pipeline = Pipeline(workspace=ws, steps=steps)\n",
"pipeline_run = Experiment(ws, 'DB_JAR_demo').submit(pipeline)\n",
"pipeline_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View Run Details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Next: ADLA as a Compute Target\n",
"To use ADLA as a compute target from Azure Machine Learning Pipeline, a AdlaStep is used. This [notebook](./aml-pipelines-use-adla-as-compute-target.ipynb) demonstrates the use of AdlaStep in Azure Machine Learning Pipeline."
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,409 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure Machine Learning Pipelines with Data Dependency\n",
"In this notebook, we will see how we can build a pipeline with implicit data dependancy."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites and Azure Machine Learning Basics\n",
"Make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. \n",
"\n",
"### Azure Machine Learning and Pipeline SDK-specific Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"from azureml.core import Workspace, Run, Experiment, Datastore\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.compute import DataFactoryCompute\n",
"from azureml.widgets import RunDetails\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)\n",
"\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.core import Pipeline, PipelineData, StepSequence\n",
"from azureml.pipeline.steps import PythonScriptStep\n",
"from azureml.pipeline.steps import DataTransferStep\n",
"from azureml.pipeline.core import PublishedPipeline\n",
"from azureml.pipeline.core.graph import PipelineParameter\n",
"\n",
"print(\"Pipeline SDK-specific imports completed\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize Workspace\n",
"\n",
"Initialize a [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class%29) object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')\n",
"\n",
"# Default datastore (Azure file storage)\n",
"def_file_store = ws.get_default_datastore() \n",
"print(\"Default datastore's name: {}\".format(def_file_store.name))\n",
"\n",
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
"print(\"Blobstore's name: {}\".format(def_blob_store.name))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# project folder\n",
"project_folder = '.'\n",
" \n",
"print('Sample projects will be created in {}.'.format(project_folder))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Required data and script files for the the tutorial\n",
"Sample files required to finish this tutorial are already copied to the project folder specified above. Even though the .py provided in the samples don't have much \"ML work,\" as a data scientist, you will work on this extensively as part of your work. To complete this tutorial, the contents of these files are not very important. The one-line files are for demostration purpose only."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compute Targets\n",
"See the list of Compute Targets on the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cts = ws.compute_targets\n",
"for ct in cts:\n",
" print(ct)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve or create a Aml compute\n",
"Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Aml Compute in the current workspace, if it doesn't already exist. We will then run the training script on this compute target."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"aml_compute_target = \"aml-compute\"\n",
"try:\n",
" aml_compute = AmlCompute(ws, aml_compute_target)\n",
" print(\"found existing compute target.\")\n",
"except:\n",
" print(\"creating new compute target\")\n",
" \n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\",\n",
" min_nodes = 1, \n",
" max_nodes = 4) \n",
" aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)\n",
" aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
"print(\"Aml Compute attached\")\n",
"# For a more detailed view of current AmlCompute status, use the 'status' property \n",
"print(aml_compute.status.serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Wait for this call to finish before proceeding (you will see the asterisk turning to a number).**\n",
"\n",
"Now that you have created the compute target, let's see what the workspace's compute_targets() function returns. You should now see one entry named 'amlcompute' of type AmlCompute."
]
},
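{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a quick way to list the targets by name and type (a small sketch using the workspace object defined above):\n",
"\n",
"```python\n",
"# compute_targets maps each compute name to a ComputeTarget object\n",
"for name, target in ws.compute_targets.items():\n",
"    print(name, type(target).__name__, target.provisioning_state)\n",
"```"
]
},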
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building Pipeline Steps with Inputs and Outputs\n",
"As mentioned earlier, a step in the pipeline can take data as input. This data can be a data source that lives in one of the accessible data locations, or intermediate data produced by a previous step in the pipeline.\n",
"\n",
"### Datasources\n",
"Datasource is represented by **[DataReference](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.data_reference.datareference?view=azure-ml-py)** object and points to data that lives in or is accessible from Datastore. DataReference could be a pointer to a file or a directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reference the data uploaded to blob storage using DataReference\n",
"# Assign the datasource to blob_input_data variable\n",
"\n",
"# DataReference(datastore, \n",
"# data_reference_name=None, \n",
"# path_on_datastore=None, \n",
"# mode='mount', \n",
"# path_on_compute=None, \n",
"# overwrite=False)\n",
"\n",
"blob_input_data = DataReference(\n",
" datastore=def_blob_store,\n",
" data_reference_name=\"test_data\",\n",
" path_on_datastore=\"20newsgroups/20news.pkl\")\n",
"print(\"DataReference object created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Intermediate/Output Data\n",
"Intermediate data (or output of a Step) is represented by **[PipelineData](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata?view=azure-ml-py)** object. PipelineData can be produced by one step and consumed in another step by providing the PipelineData object as an output of one step and the input of one or more steps.\n",
"\n",
"#### Constructing PipelineData\n",
"- **name:** [*Required*] Name of the data item within the pipeline graph\n",
"- **datastore_name:** Name of the Datastore to write this output to\n",
"- **output_name:** Name of the output\n",
"- **output_mode:** Specifies \"upload\" or \"mount\" modes for producing output (default: mount)\n",
"- **output_path_on_compute:** For \"upload\" mode, the path to which the module writes this output during execution\n",
"- **output_overwrite:** Flag to overwrite pre-existing data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define intermediate data using PipelineData\n",
"# Syntax\n",
"\n",
"# PipelineData(name, \n",
"# datastore=None, \n",
"# output_name=None, \n",
"# output_mode='mount', \n",
"# output_path_on_compute=None, \n",
"# output_overwrite=None, \n",
"# data_type=None, \n",
"# is_directory=None)\n",
"\n",
"# Naming the intermediate data as processed_data1 and assigning it to the variable processed_data1.\n",
"processed_data1 = PipelineData(\"processed_data1\",datastore=def_blob_store)\n",
"print(\"PipelineData object created\")"
]
},
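{
"cell_type": "markdown",
"metadata": {},
"source": [
"Only `name` and `datastore` are needed here; the other constructor arguments listed above are optional. Purely as an illustration (not used in this notebook), an output produced in \"upload\" mode could be declared as follows:\n",
"\n",
"```python\n",
"# Hypothetical output written to a fixed path on the compute and uploaded afterwards\n",
"uploaded_data = PipelineData(\"uploaded_data\",\n",
"                             datastore=def_blob_store,\n",
"                             output_mode=\"upload\",\n",
"                             output_path_on_compute=\"outputs/uploaded_data\")\n",
"```"
]
},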
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pipelines steps using datasources and intermediate data\n",
"Machine learning pipelines have many steps and these steps could use or reuse datasources and intermediate data. Here's how we construct such a pipeline:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes a datasource and produces intermediate data.\n",
"In this step, we define a step that consumes a datasource and produces intermediate data.\n",
"\n",
"**Open `train.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** "
]
},
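{
"cell_type": "markdown",
"metadata": {},
"source": [
"`train.py` is not reproduced in this notebook, but based on the `extract.py` and `compare.py` samples in the same folder, its argument handling is assumed to follow the same pattern, which is why the argument names used in the step definition matter:\n",
"\n",
"```python\n",
"# Sketch of the argument handling assumed to be in train.py (mirrors extract.py)\n",
"import argparse\n",
"import os\n",
"\n",
"parser = argparse.ArgumentParser(\"train\")\n",
"parser.add_argument(\"--input_data\", type=str, help=\"input data\")\n",
"parser.add_argument(\"--output_train\", type=str, help=\"output_train directory\")\n",
"args = parser.parse_args()\n",
"\n",
"if args.output_train is not None:\n",
"    os.makedirs(args.output_train, exist_ok=True)\n",
"```"
]
},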
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# step4 consumes the datasource (Datareference) in the previous step\n",
"# and produces processed_data1\n",
"trainStep = PythonScriptStep(\n",
" script_name=\"train.py\", \n",
" arguments=[\"--input_data\", blob_input_data, \"--output_train\", processed_data1],\n",
" inputs=[blob_input_data],\n",
" outputs=[processed_data1],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder\n",
")\n",
"print(\"trainStep created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes intermediate data and produces intermediate data\n",
"In this step, we define a step that consumes an intermediate data and produces intermediate data.\n",
"\n",
"**Open `extract.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# step5 to use the intermediate data produced by step4\n",
"# This step also produces an output processed_data2\n",
"processed_data2 = PipelineData(\"processed_data2\", datastore=def_blob_store)\n",
"\n",
"extractStep = PythonScriptStep(\n",
" script_name=\"extract.py\",\n",
" arguments=[\"--input_extract\", processed_data1, \"--output_extract\", processed_data2],\n",
" inputs=[processed_data1],\n",
" outputs=[processed_data2],\n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
"print(\"extractStep created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Define a Step that consumes multiple intermediate data and produces intermediate data\n",
"In this step, we define a step that consumes multiple intermediate data and produces intermediate data.\n",
"\n",
"**Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Now define step6 that takes two inputs (both intermediate data), and produce an output\n",
"processed_data3 = PipelineData(\"processed_data3\", datastore=def_blob_store)\n",
"\n",
"compareStep = PythonScriptStep(\n",
" script_name=\"compare.py\",\n",
" arguments=[\"--compare_data1\", processed_data1, \"--compare_data2\", processed_data2, \"--output_compare\", processed_data3],\n",
" inputs=[processed_data1, processed_data2],\n",
" outputs=[processed_data3], \n",
" compute_target=aml_compute, \n",
" source_directory=project_folder)\n",
"print(\"compareStep created\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Build the pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline1 = Pipeline(workspace=ws, steps=[compareStep])\n",
"print (\"Pipeline is built\")\n",
"\n",
"pipeline1.validate()\n",
"print(\"Simple validation complete\") "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_run1 = Experiment(ws, 'Data_dependency').submit(pipeline1)\n",
"print(\"Pipeline is submitted for execution\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RunDetails(pipeline_run1).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Next: Publishing the Pipeline and calling it from the REST endpoint\n",
"See this [notebook](./aml-pipelines-publish-and-run-using-rest-endpoint.ipynb) to understand how the pipeline is published and you can call the REST endpoint to run the pipeline."
]
}
],
"metadata": {
"authors": [
{
"name": "diray"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,119 +0,0 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
import os
import argparse
import datetime
import time
import tensorflow as tf
from math import ceil
import numpy as np
import shutil
from tensorflow.contrib.slim.python.slim.nets import inception_v3
from azureml.core.model import Model
slim = tf.contrib.slim
parser = argparse.ArgumentParser(description="Start a tensorflow model serving")
parser.add_argument('--model_name', dest="model_name", required=True)
parser.add_argument('--label_dir', dest="label_dir", required=True)
parser.add_argument('--dataset_path', dest="dataset_path", required=True)
parser.add_argument('--output_dir', dest="output_dir", required=True)
parser.add_argument('--batch_size', dest="batch_size", type=int, required=True)
args = parser.parse_args()
image_size = 299
num_channel = 3
# create output directory if it does not exist
os.makedirs(args.output_dir, exist_ok=True)
def get_class_label_dict(label_file):
label = []
proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
for l in proto_as_ascii_lines:
label.append(l.rstrip())
return label
class DataIterator:
def __init__(self, data_dir):
self.file_paths = []
image_list = os.listdir(data_dir)
# total_size = len(image_list)
self.file_paths = [data_dir + '/' + file_name.rstrip() for file_name in image_list]
self.labels = [1 for file_name in self.file_paths]
@property
def size(self):
return len(self.labels)
def input_pipeline(self, batch_size):
images_tensor = tf.convert_to_tensor(self.file_paths, dtype=tf.string)
labels_tensor = tf.convert_to_tensor(self.labels, dtype=tf.int64)
input_queue = tf.train.slice_input_producer([images_tensor, labels_tensor], shuffle=False)
labels = input_queue[1]
images_content = tf.read_file(input_queue[0])
image_reader = tf.image.decode_jpeg(images_content, channels=num_channel, name="jpeg_reader")
float_caster = tf.cast(image_reader, tf.float32)
new_size = tf.constant([image_size, image_size], dtype=tf.int32)
images = tf.image.resize_images(float_caster, new_size)
images = tf.divide(tf.subtract(images, [0]), [255])
image_batch, label_batch = tf.train.batch([images, labels], batch_size=batch_size, capacity=5 * batch_size)
return image_batch
def main(_):
# start_time = datetime.datetime.now()
label_file_name = os.path.join(args.label_dir, "labels.txt")
label_dict = get_class_label_dict(label_file_name)
classes_num = len(label_dict)
test_feeder = DataIterator(data_dir=args.dataset_path)
total_size = len(test_feeder.labels)
count = 0
# get model from model registry
model_path = Model.get_model_path(args.model_name)
with tf.Session() as sess:
test_images = test_feeder.input_pipeline(batch_size=args.batch_size)
with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
input_images = tf.placeholder(tf.float32, [args.batch_size, image_size, image_size, num_channel])
logits, _ = inception_v3.inception_v3(input_images,
num_classes=classes_num,
is_training=False)
probabilities = tf.argmax(logits, 1)
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
saver = tf.train.Saver()
saver.restore(sess, model_path)
out_filename = os.path.join(args.output_dir, "result-labels.txt")
with open(out_filename, "w") as result_file:
i = 0
while count < total_size and not coord.should_stop():
test_images_batch = sess.run(test_images)
file_names_batch = test_feeder.file_paths[i * args.batch_size:
min(test_feeder.size, (i + 1) * args.batch_size)]
results = sess.run(probabilities, feed_dict={input_images: test_images_batch})
new_add = min(args.batch_size, total_size - count)
count += new_add
i += 1
for j in range(new_add):
result_file.write(os.path.basename(file_names_batch[j]) + ": " + label_dict[results[j]] + "\n")
result_file.flush()
coord.request_stop()
coord.join(threads)
# copy the file to artifacts
shutil.copy(out_filename, "./outputs/")
# Move the processed data out of the blob so that the next run can process the data.
if __name__ == "__main__":
tf.app.run()

View File

@@ -1,22 +0,0 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
import argparse
import os
print("In compare.py")
print("As a data scientist, this is where I use my compare code.")
parser = argparse.ArgumentParser("compare")
parser.add_argument("--compare_data1", type=str, help="compare_data1 data")
parser.add_argument("--compare_data2", type=str, help="compare_data2 data")
parser.add_argument("--output_compare", type=str, help="output_compare directory")
args = parser.parse_args()
print("Argument 1: %s" % args.compare_data1)
print("Argument 2: %s" % args.compare_data2)
print("Argument 3: %s" % args.output_compare)
if not (args.output_compare is None):
os.makedirs(args.output_compare, exist_ok=True)
print("%s created" % args.output_compare)

View File

@@ -1,21 +0,0 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
import argparse
import os
print("In extract.py")
print("As a data scientist, this is where I use my extract code.")
parser = argparse.ArgumentParser("extract")
parser.add_argument("--input_extract", type=str, help="input_extract data")
parser.add_argument("--output_extract", type=str, help="output_extract directory")
args = parser.parse_args()
print("Argument 1: %s" % args.input_extract)
print("Argument 2: %s" % args.output_extract)
if not (args.output_extract is None):
os.makedirs(args.output_extract, exist_ok=True)
print("%s created" % args.output_extract)

View File

@@ -1,573 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Using Azure Machine Learning Pipelines for batch prediction\n",
"\n",
"In this notebook we will demonstrate how to run a batch scoring job using Azure Machine Learning pipelines. Our example job will be to take an already-trained image classification model, and run that model on some unlabeled images. The image classification model that we'll use is the __[Inception-V3 model](https://arxiv.org/abs/1512.00567)__ and we'll run this model on unlabeled images from the __[ImageNet](http://image-net.org/)__ dataset. \n",
"\n",
"The outline of this notebook is as follows:\n",
"\n",
"- Register the pretrained inception model into the model registry. \n",
"- Store the dataset images in a blob container.\n",
"- Use the registered model to do batch scoring on the images in the data blob container."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Datastore\n",
"from azureml.core import Experiment\n",
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.datastore import Datastore\n",
"from azureml.core.runconfig import CondaDependencies, RunConfiguration\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import PythonScriptStep"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from azureml.core import Workspace, Run, Experiment\n",
"\n",
"ws = Workspace.from_config()\n",
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up machine learning resources"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up datastores\n",
"First, lets access the datastore that has the model, labels, and images. \n",
"\n",
"### Create a datastore that points to a blob container containing sample images\n",
"\n",
"We have created a public blob container `sampledata` on an account named `pipelinedata`, containing images from the ImageNet evaluation set. In the next step, we create a datastore with the name `images_datastore`, which points to this container. In the call to `register_azure_blob_container` below, setting the `overwrite` flag to `True` overwrites any datastore that was created previously with that name. \n",
"\n",
"This step can be changed to point to your blob container by providing your own `datastore_name`, `container_name`, and `account_name`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"account_name = \"pipelinedata\"\n",
"datastore_name=\"images_datastore\"\n",
"container_name=\"sampledata\"\n",
"\n",
"batchscore_blob = Datastore.register_azure_blob_container(ws, \n",
" datastore_name=datastore_name, \n",
" container_name= container_name, \n",
" account_name=account_name, \n",
" overwrite=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, lets specify the default datastore for the outputs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def_data_store = ws.get_default_datastore()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure data references\n",
"Now you need to add references to the data, as inputs to the appropriate pipeline steps in your pipeline. A data source in a pipeline is represented by a DataReference object. The DataReference object points to data that lives in, or is accessible from, a datastore. We need DataReference objects corresponding to the following: the directory containing the input images, the directory in which the pretrained model is stored, the directory containing the labels, and the output directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_images = DataReference(datastore=batchscore_blob, \n",
" data_reference_name=\"input_images\",\n",
" path_on_datastore=\"batchscoring/images\",\n",
" mode=\"download\"\n",
" )\n",
"model_dir = DataReference(datastore=batchscore_blob, \n",
" data_reference_name=\"input_model\",\n",
" path_on_datastore=\"batchscoring/models\",\n",
" mode=\"download\" \n",
" )\n",
"label_dir = DataReference(datastore=batchscore_blob, \n",
" data_reference_name=\"input_labels\",\n",
" path_on_datastore=\"batchscoring/labels\",\n",
" mode=\"download\" \n",
" )\n",
"output_dir = PipelineData(name=\"scores\", \n",
" datastore=def_data_store, \n",
" output_path_on_compute=\"batchscoring/results\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create and attach Compute targets\n",
"Use the below code to create and attach Compute targets. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# choose a name for your cluster\n",
"aml_compute_name = os.environ.get(\"AML_COMPUTE_NAME\", \"gpu-cluster\")\n",
"cluster_min_nodes = os.environ.get(\"AML_COMPUTE_MIN_NODES\", 0)\n",
"cluster_max_nodes = os.environ.get(\"AML_COMPUTE_MAX_NODES\", 1)\n",
"vm_size = os.environ.get(\"AML_COMPUTE_SKU\", \"STANDARD_NC6\")\n",
"\n",
"\n",
"if aml_compute_name in ws.compute_targets:\n",
" compute_target = ws.compute_targets[aml_compute_name]\n",
" if compute_target and type(compute_target) is AmlCompute:\n",
" print('found compute target. just use it. ' + aml_compute_name)\n",
"else:\n",
" print('creating a new compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = vm_size, # NC6 is GPU-enabled\n",
" vm_priority = 'lowpriority', # optional\n",
" min_nodes = cluster_min_nodes, \n",
" max_nodes = cluster_max_nodes)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, aml_compute_name, provisioning_config)\n",
" \n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
" # For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property \n",
" print(compute_target.status.serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare the Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download the Model\n",
"\n",
"Download and extract the model from http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz to `\"models\"`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# create directory for model\n",
"model_dir = 'models'\n",
"if not os.path.isdir(model_dir):\n",
" os.mkdir(model_dir)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tarfile\n",
"import urllib.request\n",
"\n",
"url=\"http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz\"\n",
"response = urllib.request.urlretrieve(url, \"model.tar.gz\")\n",
"tar = tarfile.open(\"model.tar.gz\", \"r:gz\")\n",
"tar.extractall(model_dir)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register the model with Workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import shutil\n",
"from azureml.core.model import Model\n",
"\n",
"# register downloaded model \n",
"model = Model.register(model_path = \"models/inception_v3.ckpt\",\n",
" model_name = \"inception\", # this is the name the model is registered as\n",
" tags = {'pretrained': \"inception\"},\n",
" description = \"Imagenet trained tensorflow inception\",\n",
" workspace = ws)\n",
"# remove the downloaded dir after registration if you wish\n",
"shutil.rmtree(\"models\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Write your scoring script"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To do the scoring, we use a batch scoring script `batch_scoring.py`, which is located in the same directory that this notebook is in. You can take a look at this script to see how you might modify it for your custom batch scoring task.\n",
"\n",
"The python script `batch_scoring.py` takes input images, applies the image classification model to these images, and outputs a classification result to a results file.\n",
"\n",
"The script `batch_scoring.py` takes the following parameters:\n",
"\n",
"- `--model_name`: the name of the model being used, which is expected to be in the `model_dir` directory\n",
"- `--label_dir` : the directory holding the `labels.txt` file \n",
"- `--dataset_path`: the directory containing the input images\n",
"- `--output_dir` : the script will run the model on the data and output a `results-label.txt` to this directory\n",
"- `--batch_size` : the batch size used in running the model.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build and run the batch scoring pipeline\n",
"You have everything you need to build the pipeline. Lets put all these together."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Specify the environment to run the script\n",
"Specify the conda dependencies for your script. You will need this object when you create the pipeline step later on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import DEFAULT_GPU_IMAGE\n",
"\n",
"cd = CondaDependencies.create(pip_packages=[\"tensorflow-gpu==1.10.0\", \"azureml-defaults\"])\n",
"\n",
"# Runconfig\n",
"amlcompute_run_config = RunConfiguration(conda_dependencies=cd)\n",
"amlcompute_run_config.environment.docker.enabled = True\n",
"amlcompute_run_config.environment.docker.gpu_support = True\n",
"amlcompute_run_config.environment.docker.base_image = DEFAULT_GPU_IMAGE\n",
"amlcompute_run_config.environment.spark.precache_packages = False"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Specify the parameters for your pipeline\n",
"A subset of the parameters to the python script can be given as input when we re-run a `PublishedPipeline`. In the current example, we define `batch_size` taken by the script as such parameter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core.graph import PipelineParameter\n",
"batch_size_param = PipelineParameter(name=\"param_batch_size\", default_value=20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create the pipeline step\n",
"Create the pipeline step using the script, environment configuration, and parameters. Specify the compute target you already attached to your workspace as the target of execution of the script. We will use PythonScriptStep to create the pipeline step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inception_model_name = \"inception_v3.ckpt\"\n",
"\n",
"batch_score_step = PythonScriptStep(\n",
" name=\"batch_scoring\",\n",
" script_name=\"batch_scoring.py\",\n",
" arguments=[\"--dataset_path\", input_images, \n",
" \"--model_name\", \"inception\",\n",
" \"--label_dir\", label_dir, \n",
" \"--output_dir\", output_dir, \n",
" \"--batch_size\", batch_size_param],\n",
" compute_target=compute_target,\n",
" inputs=[input_images, label_dir],\n",
" outputs=[output_dir],\n",
" runconfig=amlcompute_run_config\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the pipeline\n",
"At this point you can run the pipeline and examine the output it produced. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline = Pipeline(workspace=ws, steps=[batch_score_step])\n",
"pipeline_run = Experiment(ws, 'batch_scoring').submit(pipeline, pipeline_params={\"param_batch_size\": 20})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor the run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download and review output"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"step_run = list(pipeline_run.get_children())[0]\n",
"step_run.download_file(\"./outputs/result-labels.txt\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"df = pd.read_csv(\"result-labels.txt\", delimiter=\":\", header=None)\n",
"df.columns = [\"Filename\", \"Prediction\"]\n",
"df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Publish a pipeline and rerun using a REST call"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a published pipeline\n",
"Once you are satisfied with the outcome of the run, you can publish the pipeline to run it with different input values later. When you publish a pipeline, you will get a REST endpoint that accepts invoking of the pipeline with the set of parameters you have already incorporated above using PipelineParameter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"published_pipeline = pipeline_run.publish_pipeline(\n",
" name=\"Inception_v3_scoring\", description=\"Batch scoring using Inception v3 model\", version=\"1.0\")\n",
"\n",
"published_id = published_pipeline.id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Rerun the pipeline using the REST endpoint"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get AAD token"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.authentication import AzureCliAuthentication\n",
"import requests\n",
"\n",
"cli_auth = AzureCliAuthentication()\n",
"aad_token = cli_auth.get_authentication_header()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run published pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core import PublishedPipeline\n",
"\n",
"rest_endpoint = published_pipeline.endpoint\n",
"# specify batch size when running the pipeline\n",
"response = requests.post(rest_endpoint, \n",
" headers=aad_token, \n",
" json={\"ExperimentName\": \"batch_scoring\",\n",
" \"ParameterAssignments\": {\"param_batch_size\": 50}})\n",
"run_id = response.json()[\"Id\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor the new run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.pipeline.core.run import PipelineRun\n",
"published_pipeline_run = PipelineRun(ws.experiments[\"batch_scoring\"], run_id)\n",
"\n",
"RunDetails(published_pipeline_run).show()"
]
}
],
"metadata": {
"authors": [
{
"name": "hichando"
}
],
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
