{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 08b: Remote Execution with DataPrep\n",
"\n",
"This sample accesses a data file on a remote DSVM through Datastore using DataPrep. Advantages of using DataPrep are:\n",
"1. DataPrep supports reading from and writing to datastores.\n",
"2. DataPrep supports automatic file type and column type detection (a short sketch follows this cell).\n",
"3. DataPrep makes passing data into AutoML really simple.\n",
"\n",
"More DataPrep documentation and examples can be found [here](https://github.com/Microsoft/AMLDataPrepDocs).\n",
"\n",
"Make sure you have run the [00.configuration](00.configuration.ipynb) notebook before running this one.\n",
"\n",
"In this notebook you will see:\n",
"1. Storing data in a Datastore.\n",
"2. Doing some basic data preparation using DataPrep and passing the prepared data (Dataflow) to AutoML for training (classification).\n",
"\n"
]
},
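{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of point 2 above, the commented-out cell below is a minimal sketch of automatic type detection. It assumes the `azureml-dataprep` package exposes `auto_read_file` and uses a hypothetical local path, so treat it as illustrative rather than part of this sample's flow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: assumes azureml-dataprep provides auto_read_file\n",
"# and that './data/sample.csv' (a hypothetical path) exists.\n",
"# import azureml.dataprep as dprep\n",
"#\n",
"# # Unlike read_csv, which reads every column as a string,\n",
"# # auto_read_file detects the file format and column types.\n",
"# dflow = dprep.auto_read_file(path='./data/sample.csv')\n",
"# dflow.get_profile()"
]
},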
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you will need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b> that is used to group and track your runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"import os\n",
"import random\n",
"import time\n",
"\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"import numpy as np\n",
"import pandas as pd\n",
"from sklearn import datasets\n",
"\n",
"import azureml.core\n",
"from azureml.core.compute import DsvmCompute\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment.\n",
"experiment_name = 'automl-remote-datastore-file'\n",
"# Project folder for run artifacts.\n",
"project_folder = './sample_projects/automl-remote-dsvm-file'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Diagnostics\n",
"\n",
"Opt in to diagnostics for a better experience, and for the quality and security of future releases."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.telemetry import set_diagnostics_collection\n",
"set_diagnostics_collection(send_diagnostics=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Remote Linux DSVM\n",
"Note: If creation fails with a message about Marketplace purchase eligibility, go to portal.azure.com, start creating a DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you have enabled it, you can exit without actually creating the VM.\n",
"\n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. If for security reasons you switch to a different port (such as 5022), you can append the port number to the address. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"compute_target_name = 'automl-dataprep'\n",
"\n",
"try:\n",
"    # If the DSVM is still provisioning, wait for it to finish.\n",
"    while ws.compute_targets[compute_target_name].provisioning_state == 'Creating':\n",
"        time.sleep(1)\n",
"\n",
"    dsvm_compute = DsvmCompute(workspace=ws, name=compute_target_name)\n",
"    print('Found existing compute target:', dsvm_compute.name)\n",
"except Exception:\n",
"    # No existing target with this name; provision a new DSVM.\n",
"    dsvm_config = DsvmCompute.provisioning_configuration(vm_size=\"Standard_D2_v2\")\n",
"    dsvm_compute = DsvmCompute.create(ws, name=compute_target_name, provisioning_configuration=dsvm_config)\n",
"    dsvm_compute.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Copy the data file locally\n",
"\n",
"We will download a 1 MB simple random sample of the Chicago Crime data into a local temporary directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tempfile\n",
"import requests\n",
"\n",
"temp_folder = tempfile.mkdtemp()\n",
"temp_csv = os.path.join(temp_folder, 'crime0.csv')\n",
"\n",
"# Download the sample file and write it to the temporary directory.\n",
"request = requests.get('https://dprepdata.blob.core.windows.net/demo/crime0-random.csv')\n",
"with open(temp_csv, 'w', encoding='utf-8') as f:\n",
"    f.write(request.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upload data to the cloud"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's make the data available in your datastore. A Datastore is a convenient construct associated with your workspace that lets you reference different types of cloud storage locations (e.g. Azure Blob Containers, Azure File Shares, Azure Data Lake Stores, etc.). The benefit of a Datastore is that you register it once and can then access it by name, without exposing secrets in your code (a short registration sketch appears after the upload below). When you first create a workspace, a default datastore is registered for you, referencing the Azure Blob Container that was provisioned with the workspace. Let's upload the data we just fetched from the public location to the default datastore.\n",
"\n",
"The `csv` file is uploaded into a directory named `datasets` at the root of the datastore."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace, Datastore\n",
"\n",
"ds = ws.get_default_datastore()\n",
"print(ds.datastore_type, ds.account_name, ds.container_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds.upload(src_dir=temp_folder, target_path='datasets', overwrite=True, show_progress=True)"
]
},
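{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, the advantage of a Datastore is that you register it once and then refer to it by name. The commented-out cell below is a minimal sketch of registering and retrieving a named blob datastore; the datastore name, container name, account name, and account key below are all hypothetical placeholders."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: all names and credentials below are placeholders.\n",
"# Register an existing blob container as a named datastore once...\n",
"# Datastore.register_azure_blob_container(workspace=ws,\n",
"#                                         datastore_name='my_datastore',\n",
"#                                         container_name='my-container',\n",
"#                                         account_name='mystorageaccount',\n",
"#                                         account_key='<storage-account-key>')\n",
"#\n",
"# ...then retrieve it by name anywhere, without re-supplying secrets.\n",
"# my_ds = Datastore.get(ws, 'my_datastore')"
]
},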
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Dataflow using DataPrep\n",
"Let's use DataPrep to read the `csv` file from the datastore we just uploaded to, and get the data profile to make sure our data looks good. We will predict the type of the offense (`Primary Type`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.dataprep as dprep\n",
"\n",
"dflow = dprep.read_csv(path=ds.path('datasets/crime0.csv'))\n",
"dflow.get_profile()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's also take a look at the first 5 rows of the data to give ourselves an idea of what the data looks like."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dflow.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the first 5 rows, we see that some rows have no value in the label column (`Primary Type`). Let's remove those rows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dflow = dflow.drop_nulls('Primary Type')\n",
"dflow.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we've removed those rows, let's split the dataflow into a features dataflow and a label dataflow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X = dflow.drop_columns(columns=['Primary Type', 'FBI Code'])\n",
"y = dflow.keep_columns(columns=['Primary Type'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate AutoML <a class=\"anchor\" id=\"Instatiate-AutoML-Remote-DSVM\"></a>\n",
"\n",
"You can specify automl_settings as **kwargs** as well. Also note that you can use the get_data() semantics for local executions too.\n",
"\n",
"<i>Note: For a remote DSVM and Batch AI you cannot pass NumPy arrays directly to AutoMLConfig.</i>\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**primary_metric**|The metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
"|**n_cross_validations**|Number of cross-validation splits.|\n",
"|**max_concurrent_iterations**|Maximum number of iterations executed in parallel. This should be less than the number of cores on the DSVM.|\n",
"|**preprocess**|*True/False* <br>Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data* and perform some common *feature extraction*.|\n",
"|**enable_cache**|Setting this to *True* causes preprocessing to be done once, with the same preprocessed data reused for all iterations. Default value is *True*.|\n",
"|**max_cores_per_iteration**|Indicates how many cores on the compute target are used to train a single pipeline.<br>Default is *1*; you can set it to *-1* to use all cores.|"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"conda_run_config.target = dsvm_compute\n",
"\n",
"# Make the AutoML SDK available on the remote compute.\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])\n",
"conda_run_config.environment.python.conda_dependencies = cd"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
"    \"iteration_timeout_minutes\": 60,\n",
"    \"iterations\": 4,\n",
"    \"n_cross_validations\": 5,\n",
"    \"primary_metric\": 'accuracy',\n",
"    \"preprocess\": True,\n",
"    \"max_cores_per_iteration\": 1,\n",
"    \"verbosity\": logging.INFO\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task='classification',\n",
"                             debug_log='automl_errors.log',\n",
"                             path=project_folder,\n",
"                             run_configuration=conda_run_config,\n",
"                             X=X,\n",
"                             y=y,\n",
"                             **automl_settings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Models <a class=\"anchor\" id=\"Training-the-model-Remote-DSVM\"></a>\n",
"\n",
"For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even while the experiment is running, to retrieve the best model so far. Once you are satisfied with a model, you can cancel a particular iteration or the whole run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploring the Results <a class=\"anchor\" id=\"Exploring-the-Results-Remote-DSVM\"></a>\n",
"#### Widget for monitoring runs\n",
"\n",
"The widget will sit on \"loading\" until the first iteration completes; then you will see an auto-updating graph and table. The widget refreshes once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under /tmp/azureml_run/{iterationid}/azureml-logs.\n",
"\n",
"NOTE: The widget displays a link at the bottom. This links to a web UI where you can explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(remote_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Wait until the run finishes.\n",
"remote_run.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Retrieve All Child Runs\n",
"You can also use SDK methods to fetch all the child runs and see the individual metrics that we log."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"children = list(remote_run.get_children())\n",
"metricslist = {}\n",
"for run in children:\n",
"    properties = run.get_properties()\n",
"    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
"    metricslist[int(properties['iteration'])] = metrics\n",
"\n",
"rundata = pd.DataFrame(metricslist).sort_index(axis=1)\n",
"rundata"
]
},
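{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each row of the table above is a metric and each column an iteration. As a small sketch (assuming `accuracy` is among the logged metrics, as expected since it is this experiment's primary metric), you can plot a metric across iterations using the matplotlib import from the setup cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: plot one logged metric across iterations.\n",
"# Guarded in case 'accuracy' was not logged for this run.\n",
"if 'accuracy' in rundata.index:\n",
"    rundata.loc['accuracy'].plot(marker='o')\n",
"    plt.xlabel('iteration')\n",
"    plt.ylabel('accuracy')\n",
"    plt.show()"
]
},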
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Canceling Runs\n",
"You can cancel ongoing remote runs using the *cancel()* and *cancel_iteration()* functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Cancel the ongoing experiment and stop scheduling new iterations.\n",
"# remote_run.cancel()\n",
"\n",
"# Cancel iteration 1 and move on to iteration 2.\n",
"# remote_run.cancel_iteration(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pre-process cache cleanup\n",
"The preprocessed data is cached in the user's default file store. Once the run has completed, the cache can be cleaned by running the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"remote_run.clean_preprocessor_cache()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The *get_output* method returns the best run and the fitted model. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Best Model based on any other metric"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# lookup_metric = \"accuracy\"\n",
"# best_run, fitted_model = remote_run.get_output(metric=lookup_metric)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Model from a specific iteration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# iteration = 1\n",
"# best_run, fitted_model = remote_run.get_output(iteration=iteration)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the Best Fitted Model <a class=\"anchor\" id=\"Testing-the-Fitted-Model-Remote-DSVM\"></a>\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dflow = dprep.read_csv(path='https://dprepdata.blob.core.windows.net/demo/crime0-test.csv')\n",
"dflow.head(5)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pandas_ml import ConfusionMatrix\n",
"\n",
"y_test = dflow.keep_columns(columns=['Primary Type']).to_pandas_dataframe()\n",
"X_test = dflow.drop_columns(columns=['Primary Type', 'FBI Code']).to_pandas_dataframe()\n",
"\n",
"ypred = fitted_model.predict(X_test.values)\n",
"\n",
"cm = ConfusionMatrix(y_test['Primary Type'], ypred)\n",
"\n",
"print(cm)\n",
"\n",
"cm.plot()"
]
}
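,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `pandas_ml` package is no longer actively maintained and may not install alongside recent pandas versions. The cell below is an alternative sketch of the same evaluation using scikit-learn only; it assumes the `y_test`, `X_test`, and `ypred` variables from the cell above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Alternative sketch using scikit-learn instead of pandas_ml.\n",
"from sklearn.metrics import accuracy_score, confusion_matrix\n",
"\n",
"labels = sorted(y_test['Primary Type'].unique())\n",
"\n",
"# Confusion matrix as a labeled DataFrame (rows = actual, columns = predicted).\n",
"cm_df = pd.DataFrame(confusion_matrix(y_test['Primary Type'], ypred, labels=labels),\n",
"                     index=labels, columns=labels)\n",
"print(cm_df)\n",
"\n",
"print('Accuracy:', accuracy_score(y_test['Primary Type'], ypred))"
]
}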
],
"metadata": {
"authors": [
{
"name": "savitam"
}
],
"kernelspec": {
"display_name": "Python [conda env:cli_dev]",
"language": "python",
"name": "conda-env-cli_dev-py"
}
},
"nbformat": 4,
"nbformat_minor": 2
}