{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Copyright (c) Microsoft Corporation. All rights reserved.\n", "\n", "Licensed under the MIT License." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Automated ML on Azure Databricks\n", "\n", "In this example we use the scikit-learn's digit dataset to showcase how you can use AutoML for a simple classification problem.\n", "\n", "In this notebook you will learn how to:\n", "1. Create Azure Machine Learning Workspace object and initialize your notebook directory to easily reload this object from a configuration file.\n", "2. Create an `Experiment` in an existing `Workspace`.\n", "3. Configure Automated ML using `AutoMLConfig`.\n", "4. Train the model using Azure Databricks.\n", "5. Explore the results.\n", "6. Test the best fitted model.\n", "\n", "Before running this notebook, please follow the readme for using Automated ML on Azure Databricks for installing necessary libraries to your cluster." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We support installing AML SDK with Automated ML as library from GUI. When attaching a library follow this link and add the below string as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.\n", "\n", "**azureml-sdk with automated ml**\n", "* Source: Upload Python Egg or PyPi\n", "* PyPi Name: `azureml-sdk[automl_databricks]`\n", "* Select Install Library" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check the Azure ML Core SDK Version to Validate Your Installation" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import azureml.core\n", "\n", "print(\"SDK Version:\", azureml.core.VERSION)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Initialize an Azure ML Workspace\n", "### What is an Azure ML Workspace and Why Do I Need One?\n", "\n", "An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.\n", "\n", "\n", "### What do I Need?\n", "\n", "To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:\n", "* A name for your workspace. You can choose one.\n", "* Your subscription id. Use the `id` value from the `az account show` command output above.\n", "* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)\n", "* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "subscription_id = \"\" # you should be owner or contributor\n", "resource_group = \"\" # you should be owner or contributor\n", "workspace_name = \"\" # your workspace name\n", "workspace_region = \"\" # your region" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating a Workspace\n", "If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.\n", "\n", "This will fail when:\n", "1. You do not have permission to create a workspace in the resource group.\n", "2. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.\n", "\n", "Because `exist_ok=True` is passed below, an existing workspace with the same name is reused rather than treated as an error. If workspace creation fails, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.\n", "\n", "**Note:** Creation of a new workspace can take several minutes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import the Workspace class and create (or retrieve) the workspace.\n", "from azureml.core import Workspace\n", "\n", "ws = Workspace.create(name = workspace_name,\n", " subscription_id = subscription_id,\n", " resource_group = resource_group, \n", " location = workspace_region, \n", " exist_ok=True)\n", "ws.get_details()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Configuring Your Local Environment\n", "You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`. A sketch of reloading the workspace from this file appears below, just before the experiment is created." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azureml.core import Workspace\n", "\n", "ws = Workspace(workspace_name = workspace_name,\n", " subscription_id = subscription_id,\n", " resource_group = resource_group)\n", "\n", "# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.\n", "ws.write_config()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a Folder to Host Sample Projects\n", "Finally, create a folder where all the sample projects will be hosted." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "sample_projects_folder = './sample_projects'\n", "\n", "if not os.path.isdir(sample_projects_folder):\n", " os.mkdir(sample_projects_folder)\n", " \n", "print('Sample projects will be created in {}.'.format(sample_projects_folder))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create an Experiment\n", "\n", "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
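] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you come back in a fresh session, you can reload the `Workspace` saved by `write_config()` above instead of recreating it. A minimal sketch, assuming the configuration file is still in the default location:\n", "```python\n", "from azureml.core import Workspace\n", "\n", "# Reads ./aml_config/config.json written earlier by ws.write_config().\n", "ws = Workspace.from_config()\n", "print(ws.name, ws.resource_group, ws.location)\n", "```"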
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import logging\n", "import os\n", "import random\n", "import time\n", "\n", "from matplotlib import pyplot as plt\n", "from matplotlib.pyplot import imshow\n", "import numpy as np\n", "import pandas as pd\n", "\n", "import azureml.core\n", "from azureml.core.experiment import Experiment\n", "from azureml.core.workspace import Workspace\n", "from azureml.train.automl import AutoMLConfig\n", "from azureml.train.automl.run import AutoMLRun" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Choose a name for the experiment and specify the project folder.\n", "experiment_name = 'automl-local-classification'\n", "project_folder = './sample_projects/automl-local-classification'\n", "\n", "experiment = Experiment(ws, experiment_name)\n", "\n", "output = {}\n", "output['SDK version'] = azureml.core.VERSION\n", "output['Subscription ID'] = ws.subscription_id\n", "output['Workspace Name'] = ws.name\n", "output['Resource Group'] = ws.resource_group\n", "output['Location'] = ws.location\n", "output['Project Directory'] = project_folder\n", "output['Experiment Name'] = experiment.name\n", "pd.set_option('display.max_colwidth', -1)\n", "pd.DataFrame(data = output, index = ['']).T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Diagnostics\n", "\n", "Opt in to diagnostics for a better experience, and to improve the quality and security of future releases." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azureml.telemetry import set_diagnostics_collection\n", "set_diagnostics_collection(send_diagnostics = True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Registering a Datastore" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A datastore saves the connection information for a storage service (e.g. Azure Blob, Azure Data Lake, Azure SQL) to your workspace so you can access the data without exposing credentials in your code. The first thing you will need to do is register a datastore; refer to our [Python SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) for how to register datastores. __Note: as a security best practice, please do not check code that registers datastores with secrets into your source control.__\n", "\n", "The code below registers a datastore pointing to a publicly readable blob container." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azureml.core import Datastore\n", "\n", "datastore_name = 'demo_training'\n", "Datastore.register_azure_blob_container(\n", " workspace = ws, \n", " datastore_name = datastore_name, \n", " container_name = 'automl-notebook-data', \n", " account_name = 'dprepdata'\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below is an example of how to register a private blob container:\n", "```python\n", "datastore = Datastore.register_azure_blob_container(\n", " workspace = ws, \n", " datastore_name = 'example_datastore', \n", " container_name = 'example-container', \n", " account_name = 'storageaccount',\n", " account_key = 'accountkey'\n", ")\n", "```\n", "The example below shows how to register an Azure Data Lake Store. Please make sure you have granted the necessary permissions for the service principal to access the data lake.\n",
"```python\n", "datastore = Datastore.register_azure_data_lake(\n", " workspace = ws,\n", " datastore_name = 'example_datastore',\n", " store_name = 'adlsstore',\n", " tenant_id = 'tenant-id-of-service-principal',\n", " client_id = 'client-id-of-service-principal',\n", " client_secret = 'client-secret-of-service-principal'\n", ")\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Training Data Using DataPrep" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Automated ML takes a Dataflow as input.\n", "\n", "If you are familiar with Pandas and have already done your data preparation work in Pandas, you can use the `read_pandas_dataframe` method in dprep to convert the DataFrame to a Dataflow.\n", "```python\n", "df = pd.read_csv(...)\n", "# apply some transforms\n", "dprep.read_pandas_dataframe(df, temp_folder='/path/accessible/by/both/driver/and/worker')\n", "```\n", "\n", "If you just need to ingest data without doing any preparation, you can use AzureML Data Prep (Data Prep) directly to do so. The code below demonstrates this scenario. Data Prep also has rich data preparation capabilities; we have many [sample notebooks](https://github.com/Microsoft/AMLDataPrepDocs) demonstrating them.\n", "\n", "You will get the datastore you registered previously and pass it to Data Prep for reading. The data comes from the digits dataset: `sklearn.datasets.load_digits()`. `DataPath` points to a specific location within a datastore." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import azureml.dataprep as dprep\n", "from azureml.data.datapath import DataPath\n", "\n", "datastore = Datastore.get(workspace = ws, name = datastore_name)\n", "\n", "X_train = dprep.read_csv(DataPath(datastore = datastore, path_on_datastore = 'X.csv')) \n", "y_train = dprep.read_csv(DataPath(datastore = datastore, path_on_datastore = 'y.csv')).to_long(dprep.ColumnSelector(term='.*', use_regex = True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Review the Data Preparation Result\n", "You can peek at the result of a Dataflow at any range using `skip(i)` and `head(j)`. Doing so evaluates only `j` records for all the steps in the Dataflow, which makes it fast even against large datasets." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train.get_profile()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_train.get_profile()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Configure AutoML\n", "\n", "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n", "\n", "|Property|Description|\n", "|-|-|\n", "|**task**|classification or regression|\n", "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics:<br>accuracy<br>AUC_weighted<br>average_precision_score_weighted<br>norm_macro_recall<br>precision_score_weighted|\n", "|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics:<br>spearman_correlation<br>normalized_root_mean_squared_error<br>r2_score<br>normalized_mean_absolute_error|\n", "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n", "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n", "|**n_cross_validations**|Number of cross validation splits.|\n", "|**spark_context**|Spark context object. For Databricks, use `spark_context=sc`.|\n", "|**max_concurrent_iterations**|Maximum number of iterations to execute in parallel. This should be <= the number of worker nodes in your Azure Databricks cluster.|\n", "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ] or [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|\n", "|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n", "|**preprocess**|Set this to True to enable preprocessing of the data, e.g. converting string features to numeric using one-hot encoding.|\n", "|**exit_score**|Target score for the experiment, measured on the primary metric. The experiment exits once this score is reached, e.g. exit_score=0.995.|" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "automl_config = AutoMLConfig(task = 'classification',\n", " debug_log = 'automl_errors.log',\n", " primary_metric = 'AUC_weighted',\n", " iteration_timeout_minutes = 10,\n", " iterations = 5,\n", " preprocess = True,\n", " n_cross_validations = 10,\n", " max_concurrent_iterations = 2, # change this based on the number of worker nodes\n", " verbosity = logging.INFO,\n", " spark_context=sc, # Databricks/Spark related\n", " X = X_train, \n", " y = y_train,\n", " path = project_folder)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Train the Models\n", "\n", "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations, this can run for a while." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "local_run = experiment.submit(automl_config, show_output = False) # For longer runs, keep show_output = False and monitor progress from the portal link below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Explore the Results" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Portal URL for Monitoring Runs\n", "\n", "The following provides a link to the web interface where you can explore individual run details and status. In the future we might support output displayed in the notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "displayHTML(\"<a href={} target='_blank'>Your experiment in Azure Portal: {}</a>\".format(local_run.get_portal_url(), local_run.id))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The cells below fetch the child runs; run them after the parent run has completed." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Retrieve All Child Runs After the Experiment Completes\n", "Once the experiment shows as completed in the portal, you can use SDK methods to fetch all the child runs and see the individual metrics that we log." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "children = list(local_run.get_children())\n", "metricslist = {}\n", "for run in children:\n", " properties = run.get_properties()\n", " metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n", " metricslist[int(properties['iteration'])] = metrics\n", "\n", "rundata = pd.DataFrame(metricslist).sort_index(axis = 1)\n", "rundata" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Retrieve the Best Model After the Run Completes\n", "\n", "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The model includes the pipeline and any preprocessing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
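] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, a minimal sketch of the *iteration* overload (the iteration number here is only an illustration; any value below the configured `iterations` works):\n", "```python\n", "# Retrieve the run and fitted model for the fourth iteration (0-based).\n", "iteration = 3\n", "iteration_run, iteration_model = local_run.get_output(iteration = iteration)\n", "print(iteration_run)\n", "```"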
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "best_run, fitted_model = local_run.get_output()\n", "print(best_run)\n", "print(fitted_model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Best Model Based on Any Other Metric\n", "Show the run and the model that has the smallest `log_loss` value:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lookup_metric = \"log_loss\"\n", "best_run, fitted_model = local_run.get_output(metric = lookup_metric)\n", "print(best_run)\n", "print(fitted_model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Test the Best Fitted Model\n", "\n", "#### Load Test Data\n", "You can split the dataset beforehand, pass the training set to AutoML, and use the test set to evaluate the best model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import datasets\n", "digits = datasets.load_digits()\n", "X_test = digits.data[:10, :]\n", "y_test = digits.target[:10]\n", "images = digits.images[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Testing Our Best Fitted Model\n", "We will predict some digits and see how the model performs. This is just an illustrative example." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Randomly select digits and test.\n", "for index in np.random.choice(len(y_test), 2, replace = False):\n", " print(index)\n", " predicted = fitted_model.predict(X_test[index:index + 1])[0]\n", " label = y_test[index]\n", " title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n", " fig = plt.figure(1, figsize = (3,3))\n", " ax1 = fig.add_axes((0,0,.8,.8))\n", " ax1.set_title(title)\n", " plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n", " display(fig)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When deploying an automated ML trained model, please specify `pip_packages=['azureml-sdk[automl]']` in your `CondaDependencies`.\n", "\n", "Please refer only to the **Deploy** section of the notebook on deployment of an Automated ML trained model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "authors": [ { "name": "savitam" }, { "name": "wamartin" } ], "kernelspec": { "display_name": "Python 3.6", "language": "python", "name": "python36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" }, "name": "auto-ml-classification-local-adb", "notebookId": 587284549713154 }, "nbformat": 4, "nbformat_minor": 1 }