Compare commits
74 Commits
ak/revert-
| Author | SHA1 | Date |
|---|---|---|
| | 60de701207 | |
| | 5841fa4a42 | |
| | 659fb7abc3 | |
| | 2e404cfc3a | |
| | 5fcf4887bc | |
| | 1e7f3117ae | |
| | bbb3f85da9 | |
| | c816dfb479 | |
| | 8c128640b1 | |
| | 4d2b937846 | |
| | 5492f52faf | |
| | 735db9ebe7 | |
| | 573030b990 | |
| | 392a059000 | |
| | 3580e54fbb | |
| | 2017bcd716 | |
| | 4a3f8e7025 | |
| | 45880114db | |
| | 314bad72a4 | |
| | f252308005 | |
| | 6622a6c5f2 | |
| | 6b19e2f263 | |
| | 42fd4598cb | |
| | 476d945439 | |
| | e96bb9bef2 | |
| | 2be4a5e54d | |
| | 247a25f280 | |
| | 5d9d8eade6 | |
| | dba978e42a | |
| | 7f4101c33e | |
| | 62b0d5df69 | |
| | f10b55a1bc | |
| | da9e86635e | |
| | 9ca6388996 | |
| | 3ce779063b | |
| | ce635ce4fe | |
| | f08e68c8e9 | |
| | 93a1d232db | |
| | 923483528c | |
| | cbeacb2ab2 | |
| | c928c50707 | |
| | efb42bacf9 | |
| | d8f349a1ae | |
| | 96a61fdc78 | |
| | ff8128f023 | |
| | 8260302a68 | |
| | fbd7f4a55b | |
| | d4e4206179 | |
| | a98b918feb | |
| | 890490ec70 | |
| | c068c9b979 | |
| | f334a3516f | |
| | 96248d8dff | |
| | c42e865700 | |
| | 9233ce089a | |
| | 6bb1e2a3e3 | |
| | e1724c8a89 | |
| | 446e0768cc | |
| | 8a2f114a16 | |
| | 80c0d4d30f | |
| | e8f4708a5a | |
| | fbaeb84204 | |
| | da1fab0a77 | |
| | 94d2890bb5 | |
| | 4d1ec4f7d4 | |
| | ace3153831 | |
| | 58bbfe57b2 | |
| | 11ea00b1d9 | |
| | b81efca3e5 | |
| | d7ceb9bca2 | |
| | 17730dc69a | |
| | 3a029d48a2 | |
| | 06d43956f3 | |
| | a1cb9b33a5 | |
30 .github/ISSUE_TEMPLATE/bug_report.md (vendored)
@@ -1,30 +0,0 @@
----
-name: Bug report
-about: Create a report to help us improve
-title: "[Notebook issue]"
-labels: ''
-assignees: ''
----
-
-**Describe the bug**
-A clear and concise description of what the bug is.
-
-Provide the following if applicable:
-+ Your Python & SDK version
-+ Python Scripts or the full notebook name
-+ Pipeline definition
-+ Environment definition
-+ Example data
-+ Any log files.
-+ Run and Workspace Id
-
-**To Reproduce**
-Steps to reproduce the behavior:
-1.
-
-**Expected behavior**
-A clear and concise description of what you expected to happen.
-
-**Additional context**
-Add any other context about the problem here.
43 .github/ISSUE_TEMPLATE/notebook-issue.md (vendored)
@@ -1,43 +0,0 @@
----
-name: Notebook issue
-about: Describe your notebook issue
-title: "[Notebook] DESCRIPTIVE TITLE"
-labels: notebook
-assignees: ''
----
-
-### DESCRIPTION: Describe clearly + concisely
-.
-### REPRODUCIBLE: Steps
-.
-### EXPECTATION: Clear description
-.
-### CONFIG/ENVIRONMENT:
-```Provide where applicable
-
-## Your Python & SDK version:
-
-## Environment definition:
-
-## Notebook name or Python scripts:
-
-## Run and Workspace Id:
-
-## Pipeline definition:
-
-## Example data:
-
-## Any log files:
-
-```
21 README.md
@@ -1,17 +1,9 @@
----
-page_type: sample
-languages:
-- python
-products:
-- azure
-- azure-machine-learning-service
-description: "With Azure Machine Learning service, learn to prep data, train, test, deploy, manage, and track machine learning models in a cloud-based environment."
----
-
 # Azure Machine Learning service example notebooks
 
 This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
 
+
 
 ## Quick installation
 ```sh
@@ -58,13 +50,18 @@ The [How to use Azure ML](./how-to-use-azureml) folder contains specific example
 
 ---
 
+## Community Repository
+Visit this [community repository](https://github.com/microsoft/MLOps/tree/master/examples) to find useful end-to-end sample notebooks. Also, please follow these [contribution guidelines](https://github.com/microsoft/MLOps/blob/master/contributing.md) when contributing to this repository.
+
 ## Projects using Azure Machine Learning
 
 Visit following repos to see projects contributed by Azure ML users:
 
 - [AMLSamples](https://github.com/Azure/AMLSamples) Number of end-to-end examples, including face recognition, predictive maintenance, customer churn and sentiment analysis.
-- [Fine tune natural language processing models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
+- [Learn about Natural Language Processing best practices using Azure Machine Learning service](https://github.com/microsoft/nlp)
+- [Pre-Train BERT models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
 - [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)
+- [UMass Amherst Student Samples](https://github.com/katiehouse3/microsoft-azure-ml-notebooks) - A number of end-to-end machine learning notebooks, including machine translation, image classification, and customer churn, created by students in the 696DS course at UMass Amherst.
 
 ## Data/Telemetry
 This repository collects usage data and sends it to Microsoft to help improve our products and services. Read Microsoft's [privacy statement to learn more](https://privacy.microsoft.com/en-US/privacystatement)
@@ -103,7 +103,7 @@
 "source": [
 "import azureml.core\n",
 "\n",
-"print(\"This notebook was created using version 1.0.57 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.0.69 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
@@ -1,307 +0,0 @@
## How to use the RAPIDS on AzureML materials

### Setting up requirements

The material requires the Azure ML SDK and the Jupyter Notebook server for interactive execution. Please refer to the instructions to [set up the environment](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local "Local Computer Set Up"). Follow the instructions under **Local Computer**, and make sure to run the last step, `pip install <new package>`, with `<new package> = progressbar2` (`pip install progressbar2`).

After following the directions, the user should end up with a conda environment (`myenv`) that can be activated in an Anaconda prompt.

The user also requires an Azure subscription with a Machine Learning Services quota, in the desired region, of 24 nodes or more (to be able to select a vmSize with 4 GPUs, as used in the notebook) for the desired VM family ([NC\_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC\_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview)). The specific vmSize to be used within the chosen family also needs to be whitelisted for Machine Learning Services usage.

### Getting and running the material

Clone the AzureML Notebooks repository from GitHub by running the following command in a local_directory:

* `C:\local_directory>git clone https://github.com/Azure/MachineLearningNotebooks.git`

In a conda prompt, navigate to the local directory, activate the conda environment (`myenv`) where the Azure ML SDK was installed, and launch Jupyter Notebook:

* `(myenv) C:\local_directory>jupyter notebook`

From the resulting browser at http://localhost:8888/tree, navigate to the master notebook:

* http://localhost:8888/tree/MachineLearningNotebooks/contrib/RAPIDS/azure-ml-with-nvidia-rapids.ipynb

The following notebook will appear:



### Master Jupyter Notebook

The notebook can be executed interactively, step by step, by pressing the Run button (circled in red in the image above).

The first couple of functional steps import the necessary AzureML libraries. If you experience any errors, please refer back to the [environment setup](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local "Local Computer Set Up") instructions.

#### Setting up a Workspace

The following step gathers the information necessary to set up a workspace in which to execute the RAPIDS script. This needs to be done only once, or not at all if you already have a usable workspace set up in the Azure Portal:



It is important to set the correct values for subscription\_id, resource\_group, workspace\_name, and region before executing the step. An example is:

    subscription_id = os.environ.get("SUBSCRIPTION_ID", "1358e503-xxxx-4043-xxxx-65b83xxxx32d")
    resource_group = os.environ.get("RESOURCE_GROUP", "AML-Rapids-Testing")
    workspace_name = os.environ.get("WORKSPACE_NAME", "AML_Rapids_Tester")
    workspace_region = os.environ.get("WORKSPACE_REGION", "West US 2")

The resource\_group and workspace_name can take any value; the region should match the region for which the subscription has the required Machine Learning Services node quota.

The first time the code is executed, it will redirect to the Azure Portal to validate subscription credentials. After the workspace is created, its related information is stored in a local file so that this step can subsequently be skipped; the immediate next step just loads the saved workspace.



Once a workspace has been created, the user can skip its creation and jump straight to this step. The configuration file resides in:

* C:\local_directory\\MachineLearningNotebooks\contrib\RAPIDS\aml_config\config.json

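The create-once, load-afterwards flow above can be sketched in code. This is a minimal sketch assuming the Azure ML SDK (`azureml-sdk`) is installed; the helper name `create_or_load_workspace` is illustrative, not the notebook's exact code.

```python
import os

# Values from the example above; replace with your own subscription details.
subscription_id = os.environ.get("SUBSCRIPTION_ID", "<your-subscription-id>")
resource_group = os.environ.get("RESOURCE_GROUP", "AML-Rapids-Testing")
workspace_name = os.environ.get("WORKSPACE_NAME", "AML_Rapids_Tester")
workspace_region = os.environ.get("WORKSPACE_REGION", "West US 2")

def create_or_load_workspace():
    """Create the workspace on first use; reuse the saved config.json afterwards."""
    from azureml.core import Workspace  # deferred import: requires the Azure ML SDK
    try:
        # Loads aml_config/config.json written by a previous run.
        return Workspace.from_config()
    except Exception:
        ws = Workspace.create(name=workspace_name,
                              subscription_id=subscription_id,
                              resource_group=resource_group,
                              location=workspace_region,
                              create_resource_group=True,
                              exist_ok=True)
        ws.write_config()  # save locally so this step can be skipped next time
        return ws
```

The first call triggers the interactive Azure Portal sign-in described above; subsequent calls read the saved configuration file instead.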
#### Creating an AML Compute Target

The following step creates an AML Compute Target:



The vm\_size parameter of the AmlCompute.provisioning\_configuration() call has to be a member of one of the VM families ([NC\_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC\_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview)), the families provisioned with the P40 or V100 GPUs supported by RAPIDS. In this particular case, a Standard\_NC24s\_V2 was used.

If the output of running the step contains an error of the form:



it is an indication that, even though the subscription has a node quota of VMs for that family, it does not have a node quota for Machine Learning Services for that family. You will need to request a node quota increase for that family in that region for **Machine Learning Services**.

Another possible error is the following:



This indicates that the specified vmSize has not been whitelisted for usage with Machine Learning Services, and a request to do so should be filed.

The successful creation of the compute target produces output like the following:



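The compute-target step can be sketched as follows. This is a sketch assuming the Azure ML SDK; the cluster name and node counts are illustrative, and `vm_size` must belong to one of the RAPIDS-capable families listed above.

```python
def get_or_create_gpu_cluster(ws, cluster_name="gpu-cluster",
                              vm_size="Standard_NC24s_v2", max_nodes=1):
    """Return an existing AML compute target, or provision a new one."""
    from azureml.core.compute import AmlCompute, ComputeTarget  # requires the Azure ML SDK
    if cluster_name in ws.compute_targets:
        return ws.compute_targets[cluster_name]
    config = AmlCompute.provisioning_configuration(
        vm_size=vm_size,   # must be one of the RAPIDS-supported GPU VM sizes
        min_nodes=0,       # scale down to zero nodes when idle
        max_nodes=max_nodes)
    target = ComputeTarget.create(ws, cluster_name, config)
    target.wait_for_completion(show_output=True)
    return target
```

Setting `min_nodes=0` also makes the final clean-up step optional, since an idle cluster then costs nothing.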
#### RAPIDS script uploading and viewing

The next step copies the RAPIDS script process_data.py, a slightly modified implementation of the [RAPIDS E2E example](https://github.com/rapidsai/notebooks/blob/master/mortgage/E2E.ipynb), into a script processing folder and presents its contents to the user. (The script is discussed in detail in the next section.) If the user wants to use a different RAPIDS script, the references to `process_data.py` have to be changed.



#### Data Uploading

The RAPIDS script loads and extracts features from Fannie Mae's Mortgage Dataset to train an XGBoost prediction model. The script uses two years of data.

The next few steps download and decompress the data and make it available to the script as an [Azure Machine Learning Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data).

The following functions are used to download and decompress the input data:






The next step uses those functions to download the file
http://rapidsai-data.s3-website.us-east-2.amazonaws.com/notebook-mortgage-data/mortgage_2000-2001.tgz
and decompress it into the local folder path = .\mortgage_2000-2001.

The step takes several minutes; the intermediate outputs provide progress indicators.



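The download-and-decompress steps shown in the screenshots above can be approximated with the standard library alone. The helper below is a simplified sketch of that logic (no checksum validation or progress bar), not the notebook's exact code.

```python
import os
import tarfile
from urllib.request import urlretrieve

URL_FORMAT = "http://rapidsai-data.s3-website.us-east-2.amazonaws.com/notebook-mortgage-data/{0}.tgz"

def download_and_extract(fileroot="mortgage_2000-2001", dest="."):
    """Fetch the mortgage archive if absent, then unpack it into ./<fileroot>."""
    archive = os.path.join(dest, fileroot + ".tgz")
    if not os.path.exists(archive):
        # Downloads roughly the two years of data used by the walkthrough.
        urlretrieve(URL_FORMAT.format(fileroot), archive)
    out_dir = os.path.join(dest, fileroot)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(out_dir)
    return out_dir
```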
The decompressed data should have the following structure:

* .\mortgage_2000-2001\acq\Acquisition_<year>Q<num>.txt
* .\mortgage_2000-2001\perf\Performance_<year>Q<num>.txt
* .\mortgage_2000-2001\names.csv

The data is divided into partitions that roughly correspond to yearly quarters. RAPIDS includes support for multi-node, multi-GPU deployments, enabling scaling up and out to much larger dataset sizes. The user will be able to verify that the number of partitions the script is able to process increases with the number of GPUs used. The RAPIDS script here is implemented for single-machine scenarios; an example supporting multiple nodes will be published later.

The next step uploads the data into the [Azure Machine Learning Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data) under the reference `fileroot = mortgage_2000-2001`.

The step takes several minutes to load the data; the output provides a progress indicator.



Once the data has been loaded into the Azure Machine Learning Datastore, in subsequent runs the user can comment out the ds.upload line and just use the `mortgage_2000-2001` datastore reference.

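The upload step can be sketched as below, assuming the Azure ML SDK; the helper name is illustrative. On later runs the `ds.upload` call can be skipped and the returned path reference reused directly.

```python
def upload_mortgage_data(ws, local_path="./mortgage_2000-2001",
                         fileroot="mortgage_2000-2001"):
    """Upload the local mortgage data into the workspace's default datastore."""
    ds = ws.get_default_datastore()  # requires an azureml.core.Workspace
    # Skip this call on subsequent runs once the data is already in the datastore.
    ds.upload(src_dir=local_path, target_path=fileroot,
              overwrite=False, show_progress=True)
    return ds.path(fileroot)  # datastore reference to pass to the run configuration
```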
#### Setting up required libraries and environment to run RAPIDS code

There are two options to set up the environment to run RAPIDS code. The following steps show how to use a prebuilt conda environment. A recommended alternative is to specify a base Docker image and package dependencies; you can find sample code for that in the notebook.



#### Wrapper function to submit the RAPIDS script as an Azure Machine Learning experiment

The next step defines a wrapper function to be used when the user runs the RAPIDS script with different arguments. It takes as arguments: *cpu\_training*, a flag indicating whether the run is meant to be processed CPU-only; *gpu\_count*, the number of GPUs to be used, if any; and *part_count*, the number of data partitions to be used.



The core of the function resides in configuring the run by instantiating a ScriptRunConfig object, which defines the source_directory for the script to be executed, the name of the script, and the arguments to be passed to it. In addition to the wrapper function arguments, two other arguments are passed: *data\_dir*, the directory where the data is stored, and *end_year*, the largest year to use partitions from.

As mentioned earlier, the size of the data that can be processed increases with the number of GPUs. In the function, the dictionary *max\_gpu\_count\_data\_partition_mapping* maps each GPU count to the maximum number of partitions that we empirically found the system can handle. The function throws a warning when the number of partitions for a given number of GPUs exceeds this maximum, but the script is still executed; the user should expect an error, since an out-of-memory situation will be encountered. If the user wants to use a different RAPIDS script, the reference to the process_data.py script has to be changed.
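Such a wrapper can be sketched as follows, assuming the Azure ML SDK. The dictionary values and the script's command-line argument names below are illustrative assumptions, not the notebook's exact ones (the empirical limits depend on the VM size).

```python
import warnings

# Illustrative limits only: maximum partitions observed to fit per GPU count.
max_gpu_count_data_partition_mapping = {1: 3, 2: 6, 3: 9, 4: 11}

def run_rapids_experiment(experiment, compute_target, scripts_folder, data_ref,
                          cpu_training=False, gpu_count=1, part_count=1,
                          end_year=2000):
    """Configure and submit one run of process_data.py (sketch)."""
    limit = max_gpu_count_data_partition_mapping.get(gpu_count, part_count)
    if not cpu_training and part_count > limit:
        # Warn but still submit; an out-of-memory failure is likely.
        warnings.warn("part_count %d may exceed what %d GPU(s) can hold in memory"
                      % (part_count, gpu_count))
    from azureml.core import ScriptRunConfig  # requires the Azure ML SDK
    src = ScriptRunConfig(
        source_directory=scripts_folder,
        script="process_data.py",
        arguments=["--data_dir", str(data_ref),
                   "--part_count", str(part_count),
                   "--end_year", str(end_year),
                   "--gpu_count", str(gpu_count),
                   "--cpu_training", str(cpu_training)])
    src.run_config.target = compute_target
    return experiment.submit(src)
```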
#### Submitting Experiments

We are ready to submit experiments: launching the RAPIDS script with different sets of parameters.

The following couple of steps submit experiments under different conditions.



The user can change the variable num\_gpu between one and the number of GPUs supported by the chosen vmSize. The variable part\_count can take any value between 1 and 11, but if it exceeds the maximum for num_gpu, the run will result in an error.

If the experiment is successfully submitted, it is placed on a queue for processing, its status appears as Queued, and output like the following appears:



When the experiment starts running, its status appears as Running and the output changes to something like this:



#### Reproducing the performance gains plot results on the Blog Post

When the run has finished successfully, its status appears as Completed and the output changes to something like this:



This is the output for an experiment run with three partitions and one GPU; notice that the reported processing time is 49.16 seconds, just as depicted on the performance gains plot in the blog post.



This output corresponds to a run with three partitions and two GPUs; notice that the reported processing time is 37.50 seconds, just as depicted on the performance gains plot in the blog post.



This output corresponds to an experiment run with three partitions and three GPUs; notice that the reported processing time is 24.40 seconds, just as depicted on the performance gains plot in the blog post.



This output corresponds to an experiment run with three partitions and four GPUs; notice that the reported processing time is 23.33 seconds, just as depicted on the performance gains plot in the blog post.



This output corresponds to an experiment run with three partitions using only the CPU; notice that the reported processing time is 9 minutes and 1.21 seconds, or 541.21 seconds, just as depicted on the performance gains plot in the blog post.



This output corresponds to an experiment run with nine partitions and four GPUs; notice that the notebook throws a warning signaling that the number of partitions exceeds the maximum that the system can handle with that many GPUs, and the run ends up failing, hence its status of Failed.

##### Freeing Resources

In the last step, the notebook deletes the compute target. (This step is optional, especially if min_nodes in the cluster is set to 0, in which case the cluster scales down to 0 nodes when there is no usage.)



### RAPIDS Script

The Master Notebook runs experiments by launching a RAPIDS script with different sets of parameters. In this section, that RAPIDS script, process_data.py in the material, is analyzed.

The script first imports all the necessary libraries and parses the arguments passed by the Master Notebook. Then all the internal functions to be used by the script are defined.

#### Wrapper Auxiliary Functions:

The functions below are wrappers for a configuration module for librmm, the RAPIDS Memory Manager Python interface:



A couple of other functions are wrappers for the submission of jobs to the DASK client:




#### Data Loading Functions:

The data is loaded through the following three functions:



All three functions use the library function cudf.read_csv(), the cuDF version of its well-known Pandas counterpart.

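A minimal sketch of one such loader is shown below, assuming a RAPIDS installation with a GPU available. The function name, column list, and delimiter are illustrative assumptions based on the mortgage dataset layout, not the script's exact code.

```python
def gpu_load_names(path="./mortgage_2000-2001/names.csv"):
    """Load the seller-name mapping onto the GPU with cuDF (sketch)."""
    import cudf  # deferred import: requires a RAPIDS installation and an NVIDIA GPU
    cols = ["seller_name", "new"]  # assumed column names for illustration
    return cudf.read_csv(path,
                         names=cols,
                         delimiter="|",  # assumption: pipe-delimited file
                         skiprows=1)
```

As with pandas.read_csv, the call returns a dataframe, but the parsing and the resulting columns live in GPU memory.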
#### Data Transformation and Feature Extraction Functions:

The raw data is transformed and processed to extract features by joining, slicing, grouping, aggregating, factoring, etc., the original dataframes, just as is done with Pandas. The following functions in the script are used for that purpose:




#### Main() Function

The previous functions are used in the main function to accomplish several steps: set up the Dask client, perform all ETL operations, and set up and train an XGBoost model. The function also assigns which data needs to be processed by each Dask worker.

##### Setting Up DASK client:

The following lines:



initialize and set up a DASK client with a number of workers corresponding to the number of GPUs to be used in the run. A successful execution of the setup results in the following output:



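That setup can be sketched as below, assuming the dask-cuda package is installed; the script's actual code may start the scheduler and workers differently.

```python
def start_dask_client(gpu_count):
    """Start a local Dask cluster with one CUDA worker per GPU (sketch)."""
    from dask_cuda import LocalCUDACluster  # deferred import: requires dask-cuda and GPUs
    from dask.distributed import Client
    cluster = LocalCUDACluster(n_workers=gpu_count)  # one worker process per GPU
    client = Client(cluster)  # jobs submitted to this client run on the GPU workers
    return client
```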
##### All ETL functions are used in single calls to process\_quarter_gpu, one per data partition



##### Concatenating the data assigned to each DASK worker

The partitions assigned to each worker are concatenated and set up for training.



##### Setting Training Parameters

The parameters used for training a gradient boosted decision tree model are set up in the following code block:



Notice how the parameters are modified when using the CPU-only mode.
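The device-dependent switch can be sketched as follows; the specific parameter values are illustrative, not the exact ones in process_data.py.

```python
def xgb_params(cpu_training=False):
    """Gradient-boosted-tree parameters, switching tree method by device (sketch)."""
    params = {
        "max_depth": 8,                      # illustrative value
        "objective": "reg:squarederror",
        "nthread": -1,
    }
    # CPU-only runs use the CPU histogram algorithm; GPU runs use the
    # GPU-accelerated histogram algorithm.
    params["tree_method"] = "hist" if cpu_training else "gpu_hist"
    return params
```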
##### Launching the training of a gradient boosted decision tree model using XGBoost



The outputs of the script can be observed in the master notebook as the script executes:



@@ -9,6 +9,13 @@
|
|||||||
"Licensed under the MIT License."
|
"Licensed under the MIT License."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
""
|
||||||
|
]
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@@ -20,7 +27,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"The [RAPIDS](https://www.developer.nvidia.com/rapids) suite of software libraries from NVIDIA enables the execution of end-to-end data science and analytics pipelines entirely on GPUs. In many machine learning projects, a significant portion of the model training time is spent in setting up the data; this stage of the process is known as Extraction, Transformation and Loading, or ETL. By using the DataFrame API for ETL\u00c3\u201a\u00c2\u00a0and GPU-capable ML algorithms in RAPIDS, data preparation and training models can be done in GPU-accelerated end-to-end pipelines without incurring serialization costs between the pipeline stages. This notebook demonstrates how to use NVIDIA RAPIDS to prepare data and train model\u00c2\u00a0in Azure.\n",
|
"The [RAPIDS](https://www.developer.nvidia.com/rapids) suite of software libraries from NVIDIA enables the execution of end-to-end data science and analytics pipelines entirely on GPUs. In many machine learning projects, a significant portion of the model training time is spent in setting up the data; this stage of the process is known as Extraction, Transformation and Loading, or ETL. By using the DataFrame API for ETL\u00c2\u00a0and GPU-capable ML algorithms in RAPIDS, data preparation and training models can be done in GPU-accelerated end-to-end pipelines without incurring serialization costs between the pipeline stages. This notebook demonstrates how to use NVIDIA RAPIDS to prepare data and train model\u00c3\u201a\u00c2\u00a0in Azure.\n",
|
||||||
" \n",
|
" \n",
|
||||||
"In this notebook, we will do the following:\n",
|
"In this notebook, we will do the following:\n",
|
||||||
" \n",
|
" \n",
|
||||||
@@ -119,8 +126,10 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"ws = Workspace.from_config()\n",
|
"ws = Workspace.from_config()\n",
|
||||||
|
"\n",
|
||||||
"# if a locally-saved configuration file for the workspace is not available, use the following to load workspace\n",
|
"# if a locally-saved configuration file for the workspace is not available, use the following to load workspace\n",
|
||||||
"# ws = Workspace(subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name)\n",
|
"# ws = Workspace(subscription_id=subscription_id, resource_group=resource_group, workspace_name=workspace_name)\n",
|
||||||
|
"\n",
|
||||||
"print('Workspace name: ' + ws.name, \n",
|
"print('Workspace name: ' + ws.name, \n",
|
||||||
" 'Azure region: ' + ws.location, \n",
|
" 'Azure region: ' + ws.location, \n",
|
||||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||||
@@ -161,7 +170,7 @@
|
|||||||
"if gpu_cluster_name in ws.compute_targets:\n",
|
"if gpu_cluster_name in ws.compute_targets:\n",
|
||||||
" gpu_cluster = ws.compute_targets[gpu_cluster_name]\n",
|
" gpu_cluster = ws.compute_targets[gpu_cluster_name]\n",
|
||||||
" if gpu_cluster and type(gpu_cluster) is AmlCompute:\n",
|
" if gpu_cluster and type(gpu_cluster) is AmlCompute:\n",
|
||||||
" print('found compute target. just use it. ' + gpu_cluster_name)\n",
|
" print('Found compute target. Will use {0} '.format(gpu_cluster_name))\n",
|
||||||
"else:\n",
|
"else:\n",
|
||||||
" print(\"creating new cluster\")\n",
|
" print(\"creating new cluster\")\n",
|
||||||
" # vm_size parameter below could be modified to one of the RAPIDS-supported VM types\n",
|
" # vm_size parameter below could be modified to one of the RAPIDS-supported VM types\n",
|
||||||
@@ -183,7 +192,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"The _process_data.py_ script used in the step below is a slightly modified implementation of [RAPIDS E2E example](https://github.com/rapidsai/notebooks/blob/master/mortgage/E2E.ipynb)."
|
"The _process_data.py_ script used in the step below is a slightly modified implementation of [RAPIDS Mortgage E2E example](https://github.com/rapidsai/notebooks-contrib/blob/master/intermediate_notebooks/E2E/mortgage/mortgage_e2e.ipynb)."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -194,10 +203,7 @@
 "source": [
 "# copy process_data.py into the script folder\n",
 "import shutil\n",
-"shutil.copy('./process_data.py', os.path.join(scripts_folder, 'process_data.py'))\n",
-"\n",
-"with open(os.path.join(scripts_folder, './process_data.py'), 'r') as process_data_script:\n",
-" print(process_data_script.read())"
+"shutil.copy('./process_data.py', os.path.join(scripts_folder, 'process_data.py'))"
 ]
 },
 {
@@ -221,13 +227,6 @@
 "### Downloading Data"
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"<font color='red'>Important</font>: Python package progressbar2 is necessary to run the following cell. If it is not available in your environment where this notebook is running, please install it."
-]
-},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -237,7 +236,6 @@
 "import tarfile\n",
 "import hashlib\n",
 "from urllib.request import urlretrieve\n",
-"from progressbar import ProgressBar\n",
 "\n",
 "def validate_downloaded_data(path):\n",
 " if(os.path.isdir(path) and os.path.exists(path + '//names.csv')) :\n",
@@ -267,7 +265,7 @@
 " url_format = 'http://rapidsai-data.s3-website.us-east-2.amazonaws.com/notebook-mortgage-data/{0}.tgz'\n",
 " url = url_format.format(fileroot)\n",
 " print(\"...Downloading file :{0}\".format(filename))\n",
-" urlretrieve(url, filename,show_progress)\n",
+" urlretrieve(url, filename)\n",
 " pbar.finish()\n",
 " print(\"...File :{0} finished downloading\".format(filename))\n",
 " else:\n",
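The hunk above drops the progressbar-based `show_progress` callback from the `urlretrieve` call. If progress reporting is still wanted without the extra dependency, `urllib.request.urlretrieve` accepts a plain reporthook with the signature `(block_num, block_size, total_size)`. A minimal sketch (the `url`/`filename` names are placeholders, not values from the notebook):

```python
def percent_complete(block_num, block_size, total_size):
    """Return download progress as an integer percentage (0-100)."""
    if total_size <= 0:  # some servers do not send Content-Length
        return 0
    done = min(block_num * block_size, total_size)
    return int(done * 100 / total_size)

def show_progress(block_num, block_size, total_size):
    # signature expected by urllib.request.urlretrieve's reporthook argument
    print('\r...{0}%'.format(percent_complete(block_num, block_size, total_size)), end='')

# urlretrieve(url, filename, show_progress)  # hypothetical call site
```

This keeps the cell free of the progressbar2 requirement that the deleted markdown cell used to warn about.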
@@ -282,9 +280,7 @@
 " so_far = 0\n",
 " for member_info in members:\n",
 " tar.extract(member_info,path=path)\n",
-" show_progress(so_far, 1, numFiles)\n",
 " so_far += 1\n",
-" pbar.finish()\n",
 " print(\"...All {0} files have been decompressed\".format(numFiles))\n",
 " tar.close()"
 ]
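With the progress calls removed, the extraction loop reduces to iterating the archive members and counting them. A self-contained sketch of that pattern (toy file names are illustrative, not from the dataset):

```python
import io
import os
import tarfile
import tempfile

def extract_all(tar_path, dest):
    """Extract every member of an archive and report how many files it held."""
    with tarfile.open(tar_path) as tar:
        members = tar.getmembers()
        for member_info in members:
            tar.extract(member_info, path=dest)
        print("...All {0} files have been decompressed".format(len(members)))
        return len(members)

# exercise the helper on a tiny archive built in a temp directory
with tempfile.TemporaryDirectory() as d:
    archive = os.path.join(d, 'sample.tar')
    with tarfile.open(archive, 'w') as tar:
        for name in ('names.csv', 'perf.txt'):
            data = b'example'
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    n = extract_all(archive, d)
```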
@@ -324,7 +320,9 @@
 "\n",
 "# download and uncompress data in a local directory before uploading to data store\n",
 "# directory specified in src_dir parameter below should have the acq, perf directories with data and names.csv file\n",
-"ds.upload(src_dir=path, target_path=fileroot, overwrite=True, show_progress=True)\n",
+"\n",
+"# ---->>>> UNCOMMENT THE BELOW LINE TO UPLOAD YOUR DATA IF NOT DONE SO ALREADY <<<<----\n",
+"# ds.upload(src_dir=path, target_path=fileroot, overwrite=True, show_progress=True)\n",
 "\n",
 "# data already uploaded to the datastore\n",
 "data_ref = DataReference(data_reference_name='data', datastore=ds, path_on_datastore=fileroot)"
@@ -360,7 +358,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The following code shows how to use an existing image from [Docker Hub](https://hub.docker.com/r/rapidsai/rapidsai/) that has a prebuilt conda environment named 'rapids' when creating a RunConfiguration. Note that this conda environment does not include the azureml-defaults package that is required for using AML functionality like metrics tracking, model management etc. This package is automatically installed when you use the 'Specify package dependencies' option, which is why that is the recommended option for creating a RunConfiguration in AML."
+"The following code shows how to install RAPIDS using conda. The `rapids.yml` file contains the list of packages necessary to run this tutorial. **NOTE:** The initial build of the image might take up to 20 minutes as the service needs to build and cache the new image; once the image is built, subsequent runs use the cached image and the overhead is minimal."
 ]
 },
 {
@@ -369,17 +367,13 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"run_config = RunConfiguration()\n",
+"cd = CondaDependencies(conda_dependencies_file_path='rapids.yml')\n",
+"run_config = RunConfiguration(conda_dependencies=cd)\n",
 "run_config.framework = 'python'\n",
-"run_config.environment.python.user_managed_dependencies = True\n",
-"run_config.environment.python.interpreter_path = '/conda/envs/rapids/bin/python'\n",
 "run_config.target = gpu_cluster_name\n",
 "run_config.environment.docker.enabled = True\n",
 "run_config.environment.docker.gpu_support = True\n",
-"run_config.environment.docker.base_image = \"rapidsai/rapidsai:cuda9.2-runtime-ubuntu18.04\"\n",
+"run_config.environment.docker.base_image = \"mcr.microsoft.com/azureml/base-gpu:intelmpi2018.3-cuda10.0-cudnn7-ubuntu16.04\"\n",
-"# run_config.environment.docker.base_image_registry.address = '<registry_url>' # not required if the base_image is in Docker hub\n",
-"# run_config.environment.docker.base_image_registry.username = '<user_name>' # needed only for private images\n",
-"# run_config.environment.docker.base_image_registry.password = '<password>' # needed only for private images\n",
 "run_config.environment.spark.precache_packages = False\n",
 "run_config.data_references={'data':data_ref.to_config()}"
 ]
@@ -388,14 +382,14 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"#### Specify package dependencies"
+"#### Using Docker"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The following code shows how to list package dependencies in a conda environment definition file (rapids.yml) when creating a RunConfiguration"
+"Alternatively, you can specify a RAPIDS Docker image."
 ]
 },
 {
@@ -404,16 +398,17 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# cd = CondaDependencies(conda_dependencies_file_path='rapids.yml')\n",
-"# run_config = RunConfiguration(conda_dependencies=cd)\n",
+"# run_config = RunConfiguration()\n",
 "# run_config.framework = 'python'\n",
+"# run_config.environment.python.user_managed_dependencies = True\n",
+"# run_config.environment.python.interpreter_path = '/conda/envs/rapids/bin/python'\n",
 "# run_config.target = gpu_cluster_name\n",
 "# run_config.environment.docker.enabled = True\n",
 "# run_config.environment.docker.gpu_support = True\n",
-"# run_config.environment.docker.base_image = \"<image>\"\n",
+"# run_config.environment.docker.base_image = \"rapidsai/rapidsai:cuda9.2-runtime-ubuntu18.04\"\n",
-"# run_config.environment.docker.base_image_registry.address = '<registry_url>' # not required if the base_image is in Docker hub\n",
+"# # run_config.environment.docker.base_image_registry.address = '<registry_url>' # not required if the base_image is in Docker hub\n",
-"# run_config.environment.docker.base_image_registry.username = '<user_name>' # needed only for private images\n",
+"# # run_config.environment.docker.base_image_registry.username = '<user_name>' # needed only for private images\n",
-"# run_config.environment.docker.base_image_registry.password = '<password>' # needed only for private images\n",
+"# # run_config.environment.docker.base_image_registry.password = '<password>' # needed only for private images\n",
 "# run_config.environment.spark.precache_packages = False\n",
 "# run_config.data_references={'data':data_ref.to_config()}"
 ]
@@ -551,9 +546,9 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.6"
+"version": "3.6.8"
 }
 },
 "nbformat": 4,
-"nbformat_minor": 2
+"nbformat_minor": 4
 }
(49 binary image files removed in this comparison; sizes ranged from 2.4 KiB to 554 KiB)
@@ -15,21 +15,6 @@ from glob import glob
 import os
 import argparse
 
-def initialize_rmm_pool():
-    from librmm_cffi import librmm_config as rmm_cfg
-
-    rmm_cfg.use_pool_allocator = True
-    #rmm_cfg.initial_pool_size = 2<<30 # set to 2GiB. Default is 1/2 total GPU memory
-    import cudf
-    return cudf._gdf.rmm_initialize()
-
-def initialize_rmm_no_pool():
-    from librmm_cffi import librmm_config as rmm_cfg
-
-    rmm_cfg.use_pool_allocator = False
-    import cudf
-    return cudf._gdf.rmm_initialize()
-
 def run_dask_task(func, **kwargs):
     task = func(**kwargs)
     return task
@@ -207,26 +192,26 @@ def gpu_load_names(col_path):
 
 def create_ever_features(gdf, **kwargs):
     everdf = gdf[['loan_id', 'current_loan_delinquency_status']]
-    everdf = everdf.groupby('loan_id', method='hash').max()
+    everdf = everdf.groupby('loan_id', method='hash').max().reset_index()
     del(gdf)
-    everdf['ever_30'] = (everdf['max_current_loan_delinquency_status'] >= 1).astype('int8')
+    everdf['ever_30'] = (everdf['current_loan_delinquency_status'] >= 1).astype('int8')
-    everdf['ever_90'] = (everdf['max_current_loan_delinquency_status'] >= 3).astype('int8')
+    everdf['ever_90'] = (everdf['current_loan_delinquency_status'] >= 3).astype('int8')
-    everdf['ever_180'] = (everdf['max_current_loan_delinquency_status'] >= 6).astype('int8')
+    everdf['ever_180'] = (everdf['current_loan_delinquency_status'] >= 6).astype('int8')
-    everdf.drop_column('max_current_loan_delinquency_status')
+    everdf.drop_column('current_loan_delinquency_status')
     return everdf
 
 def create_delinq_features(gdf, **kwargs):
     delinq_gdf = gdf[['loan_id', 'monthly_reporting_period', 'current_loan_delinquency_status']]
     del(gdf)
-    delinq_30 = delinq_gdf.query('current_loan_delinquency_status >= 1')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
+    delinq_30 = delinq_gdf.query('current_loan_delinquency_status >= 1')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min().reset_index()
-    delinq_30['delinquency_30'] = delinq_30['min_monthly_reporting_period']
+    delinq_30['delinquency_30'] = delinq_30['monthly_reporting_period']
-    delinq_30.drop_column('min_monthly_reporting_period')
+    delinq_30.drop_column('monthly_reporting_period')
-    delinq_90 = delinq_gdf.query('current_loan_delinquency_status >= 3')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
+    delinq_90 = delinq_gdf.query('current_loan_delinquency_status >= 3')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min().reset_index()
-    delinq_90['delinquency_90'] = delinq_90['min_monthly_reporting_period']
+    delinq_90['delinquency_90'] = delinq_90['monthly_reporting_period']
-    delinq_90.drop_column('min_monthly_reporting_period')
+    delinq_90.drop_column('monthly_reporting_period')
-    delinq_180 = delinq_gdf.query('current_loan_delinquency_status >= 6')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min()
+    delinq_180 = delinq_gdf.query('current_loan_delinquency_status >= 6')[['loan_id', 'monthly_reporting_period']].groupby('loan_id', method='hash').min().reset_index()
-    delinq_180['delinquency_180'] = delinq_180['min_monthly_reporting_period']
+    delinq_180['delinquency_180'] = delinq_180['monthly_reporting_period']
-    delinq_180.drop_column('min_monthly_reporting_period')
+    delinq_180.drop_column('monthly_reporting_period')
     del(delinq_gdf)
     delinq_merge = delinq_30.merge(delinq_90, how='left', on=['loan_id'], type='hash')
     delinq_merge['delinquency_90'] = delinq_merge['delinquency_90'].fillna(np.dtype('datetime64[ms]').type('1970-01-01').astype('datetime64[ms]'))
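The changes in this hunk adapt to a cudf API change: grouped aggregations no longer prefix result columns with the aggregation name (`max_current_loan_delinquency_status` becomes `current_loan_delinquency_status`), and `.reset_index()` restores the group key as a regular column. pandas behaves the same way, so the pattern can be sketched with a hypothetical toy frame:

```python
import pandas as pd

# hypothetical stand-in for the mortgage performance table
gdf = pd.DataFrame({
    'loan_id': [1, 1, 2, 2],
    'current_loan_delinquency_status': [0, 4, 1, 2],
})

# the aggregated column keeps its original name; reset_index() turns the
# group key back into a regular column
everdf = gdf.groupby('loan_id')['current_loan_delinquency_status'].max().reset_index()
everdf['ever_30'] = (everdf['current_loan_delinquency_status'] >= 1).astype('int8')
everdf['ever_90'] = (everdf['current_loan_delinquency_status'] >= 3).astype('int8')
```

Loan 1's worst status is 4 (ever_30 and ever_90 both set), loan 2's is 2 (only ever_30 set).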
@@ -279,16 +264,15 @@ def create_joined_df(gdf, everdf, **kwargs):
 def create_12_mon_features(joined_df, **kwargs):
     testdfs = []
     n_months = 12
 
     for y in range(1, n_months + 1):
         tmpdf = joined_df[['loan_id', 'timestamp_year', 'timestamp_month', 'delinquency_12', 'upb_12']]
         tmpdf['josh_months'] = tmpdf['timestamp_year'] * 12 + tmpdf['timestamp_month']
         tmpdf['josh_mody_n'] = ((tmpdf['josh_months'].astype('float64') - 24000 - y) / 12).floor()
-        tmpdf = tmpdf.groupby(['loan_id', 'josh_mody_n'], method='hash').agg({'delinquency_12': 'max','upb_12': 'min'})
+        tmpdf = tmpdf.groupby(['loan_id', 'josh_mody_n'], method='hash').agg({'delinquency_12': 'max','upb_12': 'min'}).reset_index()
-        tmpdf['delinquency_12'] = (tmpdf['max_delinquency_12']>3).astype('int32')
+        tmpdf['delinquency_12'] = (tmpdf['delinquency_12']>3).astype('int32')
-        tmpdf['delinquency_12'] +=(tmpdf['min_upb_12']==0).astype('int32')
+        tmpdf['delinquency_12'] +=(tmpdf['upb_12']==0).astype('int32')
-        tmpdf.drop_column('max_delinquency_12')
+        tmpdf['upb_12'] = tmpdf['upb_12']
-        tmpdf['upb_12'] = tmpdf['min_upb_12']
-        tmpdf.drop_column('min_upb_12')
         tmpdf['timestamp_year'] = (((tmpdf['josh_mody_n'] * n_months) + 24000 + (y - 1)) / 12).floor().astype('int16')
         tmpdf['timestamp_month'] = np.int8(y)
         tmpdf.drop_column('josh_mody_n')
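The window arithmetic in this hunk is easy to misread in diff form: each (year, month) pair is flattened into a single month counter (`josh_months`), shifted by the loop offset `y`, and floor-divided into 12-month buckets; the assigned `timestamp_year` is then recovered from the bucket index. A plain-Python sketch of the same arithmetic (function names are mine, not from the script):

```python
import math

def josh_mody_n(year, month, y, n_months=12):
    """Bucket a (year, month) observation into a 12-month window offset by y."""
    josh_months = year * 12 + month  # months flattened onto one axis
    return math.floor((josh_months - 24000 - y) / 12)

def bucket_year(mody_n, y, n_months=12):
    """Recover the timestamp_year the ETL assigns to a bucket."""
    return math.floor(((mody_n * n_months) + 24000 + (y - 1)) / 12)
```

For example, January 2001 with offset `y=1` flattens to 24013 months, lands in bucket 1, and `bucket_year` maps bucket 1 back to year 2001; December 2000 with the same offset lands in bucket 0.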
@@ -329,6 +313,7 @@ def last_mile_cleaning(df, **kwargs):
     'delinquency_30', 'delinquency_90', 'delinquency_180', 'upb_12',
     'zero_balance_effective_date','foreclosed_after', 'disposition_date','timestamp'
     ]
+
     for column in drop_list:
         df.drop_column(column)
     for col, dtype in df.dtypes.iteritems():
@@ -342,7 +327,6 @@ def last_mile_cleaning(df, **kwargs):
     return df.to_arrow(preserve_index=False)
 
 def main():
-    #print('XGBOOST_BUILD_DOC is ' + os.environ['XGBOOST_BUILD_DOC'])
     parser = argparse.ArgumentParser("rapidssample")
     parser.add_argument("--data_dir", type=str, help="location of data")
     parser.add_argument("--num_gpu", type=int, help="Number of GPUs to use", default=1)
@@ -364,7 +348,6 @@ def main():
     print('data_dir = {0}'.format(data_dir))
     print('num_gpu = {0}'.format(num_gpu))
     print('part_count = {0}'.format(part_count))
-    #part_count = part_count + 1 # adding one because the usage below is not inclusive
     print('end_year = {0}'.format(end_year))
     print('cpu_predictor = {0}'.format(cpu_predictor))
 
@@ -380,19 +363,17 @@ def main():
     client
     print(client.ncores())
 
     # to download data for this notebook, visit https://rapidsai.github.io/demos/datasets/mortgage-data and update the following paths accordingly
     acq_data_path = "{0}/acq".format(data_dir) #"/rapids/data/mortgage/acq"
     perf_data_path = "{0}/perf".format(data_dir) #"/rapids/data/mortgage/perf"
     col_names_path = "{0}/names.csv".format(data_dir) # "/rapids/data/mortgage/names.csv"
     start_year = 2000
-    #end_year = 2000 # end_year is inclusive -- converted to parameter
-    #part_count = 2 # the number of data files to train against -- converted to parameter
 
-    client.run(initialize_rmm_pool)
     client
-    print(client.ncores())
+    print('--->>> Workers used: {0}'.format(client.ncores()))
     # NOTE: The ETL calculates additional features which are then dropped before creating the XGBoost DMatrix.
     # This can be optimized to avoid calculating the dropped features.
     print("Reading ...")
     t1 = datetime.datetime.now()
     gpu_dfs = []
@@ -414,14 +395,9 @@ def main():
 
     wait(gpu_dfs)
     t2 = datetime.datetime.now()
-    print("Reading time ...")
-    print(t2-t1)
-    print('len(gpu_dfs) is {0}'.format(len(gpu_dfs)))
+    print("Reading time: {0}".format(str(t2-t1)))
+    print('--->>> Number of data parts: {0}'.format(len(gpu_dfs)))
 
-    client.run(cudf._gdf.rmm_finalize)
-    client.run(initialize_rmm_no_pool)
-    client
-    print(client.ncores())
     dxgb_gpu_params = {
         'nround': 100,
         'max_depth': 8,
@@ -438,7 +414,7 @@ def main():
         'n_gpus': 1,
         'distributed_dask': True,
         'loss': 'ls',
-        'objective': 'gpu:reg:linear',
+        'objective': 'reg:squarederror',
         'max_features': 'auto',
         'criterion': 'friedman_mse',
         'grow_policy': 'lossguide',
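The objective rename reflects an XGBoost naming change (an assumption about XGBoost 0.90+: the `gpu:` objective prefix and `reg:linear` were deprecated in favor of `reg:squarederror`, with GPU use selected elsewhere in the parameters). The parameter handling in `main()` can be sketched as plain dicts, mirroring the CPU fallback shown further down in this diff:

```python
# Hypothetical helper mirroring the script's parameter handling; not part
# of process_data.py itself.
def make_params(cpu_predictor=False):
    params = {
        'nround': 100,
        'max_depth': 8,
        'objective': 'reg:squarederror',  # replaces deprecated 'gpu:reg:linear'
    }
    if cpu_predictor:
        # the script switches to the CPU histogram method and the older
        # objective spelling when training on CPUs
        params['predictor'] = 'cpu_predictor'
        params['tree_method'] = 'hist'
        params['objective'] = 'reg:linear'
    return params
```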
@@ -446,13 +422,13 @@ def main():
     }
 
     if cpu_predictor:
-        print('Training using CPUs')
+        print('\n---->>>> Training using CPUs <<<<----\n')
         dxgb_gpu_params['predictor'] = 'cpu_predictor'
         dxgb_gpu_params['tree_method'] = 'hist'
         dxgb_gpu_params['objective'] = 'reg:linear'
     else:
-        print('Training using GPUs')
+        print('\n---->>>> Training using GPUs <<<<----\n')
 
     print('Training parameters are {0}'.format(dxgb_gpu_params))
 
@@ -481,14 +457,13 @@ def main():
     gpu_dfs = [gpu_df.persist() for gpu_df in gpu_dfs]
     gc.collect()
     wait(gpu_dfs)
 
+    # TRAIN THE MODEL
     labels = None
     t1 = datetime.datetime.now()
     bst = dxgb_gpu.train(client, dxgb_gpu_params, gpu_dfs, labels, num_boost_round=dxgb_gpu_params['nround'])
     t2 = datetime.datetime.now()
-    print("Training time ...")
-    print(t2-t1)
-    print('str(bst) is {0}'.format(str(bst)))
+    print('\n---->>>> Training time: {0} <<<<----\n'.format(str(t2-t1)))
     print('Exiting script')
 
 if __name__ == '__main__':
@@ -1,35 +0,0 @@
-name: rapids
-channels:
-  - nvidia
-  - numba
-  - conda-forge
-  - rapidsai
-  - defaults
-  - pytorch
-
-dependencies:
-  - arrow-cpp=0.12.0
-  - bokeh
-  - cffi=1.11.5
-  - cmake=3.12
-  - cuda92
-  - cython==0.29
-  - dask=1.1.1
-  - distributed=1.25.3
-  - faiss-gpu=1.5.0
-  - numba=0.42
-  - numpy=1.15.4
-  - nvstrings
-  - pandas=0.23.4
-  - pyarrow=0.12.0
-  - scikit-learn
-  - scipy
-  - cudf
-  - cuml
-  - python=3.6.2
-  - jupyterlab
-  - pip:
-    - file:/rapids/xgboost/python-package/dist/xgboost-0.81-py3-none-any.whl
-    - git+https://github.com/rapidsai/dask-xgboost@dask-cudf
-    - git+https://github.com/rapidsai/dask-cudf@master
-    - git+https://github.com/rapidsai/dask-cuda@master
@@ -6,7 +6,7 @@ dependencies:
 - python>=3.5.2,<3.6.8
 - nb_conda
 - matplotlib==2.1.0
-- numpy>=1.11.0,<=1.16.2
+- numpy>=1.16.0,<=1.16.2
 - cython
 - urllib3<1.24
 - scipy>=1.0.0,<=1.1.0
@@ -14,6 +14,7 @@ dependencies:
 - pandas>=0.22.0,<=0.23.4
 - py-xgboost<=0.80
 - pyarrow>=0.11.0
+- conda-forge::fbprophet==0.5
 
 - pip:
   # Required packages for AzureML execution, history, and data preparation.
@@ -21,5 +22,6 @@ dependencies:
   - azureml-train-automl
   - azureml-widgets
   - azureml-explain-model
+  - azureml-contrib-interpret
   - pandas_ml
 
@@ -7,7 +7,7 @@ dependencies:
 - python>=3.5.2,<3.6.8
 - nb_conda
 - matplotlib==2.1.0
-- numpy>=1.11.0,<=1.16.2
+- numpy>=1.16.0,<=1.16.2
 - cython
 - urllib3<1.24
 - scipy>=1.0.0,<=1.1.0
@@ -15,6 +15,7 @@ dependencies:
 - pandas>=0.22.0,<0.23.0
 - py-xgboost<=0.80
 - pyarrow>=0.11.0
+- conda-forge::fbprophet==0.5
 
 - pip:
   # Required packages for AzureML execution, history, and data preparation.
@@ -22,5 +23,6 @@ dependencies:
   - azureml-train-automl
   - azureml-widgets
   - azureml-explain-model
+  - azureml-contrib-interpret
   - pandas_ml
 
@@ -92,8 +92,6 @@
 "\n",
 "# choose a name for experiment\n",
 "experiment_name = 'automl-classification-bmarketing'\n",
-"# project folder\n",
-"project_folder = './sample_projects/automl-classification-bankmarketing'\n",
 "\n",
 "experiment=Experiment(ws, experiment_name)\n",
 "\n",
@@ -103,7 +101,6 @@
 "output['Workspace'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -164,20 +161,7 @@
 "source": [
 "# Data\n",
 "\n",
-"Here load the data in the get_data() script to be utilized in azure compute. To do this first load all the necessary libraries and dependencies to set up paths for the data and to create the conda_Run_config."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"if not os.path.isdir('data'):\n",
-" os.mkdir('data')\n",
-" \n",
-"if not os.path.exists(project_folder):\n",
-" os.makedirs(project_folder)"
+"Create a run configuration for the remote run."
 ]
 },
 {
@@ -207,7 +191,7 @@
 "source": [
 "### Load Data\n",
 "\n",
-"Here we create the script to be run in azure comput for loading the data, we load the bank marketing dataset into X_train and y_train. Next X_train and y_train is returned for training the model."
+"Load the bank marketing dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model."
 ]
 },
 {
@@ -218,8 +202,6 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv\"\n",
|
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv\"\n",
|
||||||
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
|
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
|
||||||
"X_train = dataset.drop_columns(columns=['y'])\n",
|
|
||||||
"y_train = dataset.keep_columns(columns=['y'], validate=True)\n",
|
|
||||||
"dataset.take(5).to_pandas_dataframe()"
|
"dataset.take(5).to_pandas_dataframe()"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
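The hunk above drops the `drop_columns`/`keep_columns` pair because the new API takes the full dataset plus a label column name. As a minimal sketch of what those two removed calls did conceptually — plain dictionaries standing in for the lazy `TabularDataset` API, with illustrative column names, not the SDK implementation:

```python
def drop_columns(rows, columns):
    """Rough analogue of TabularDataset.drop_columns: remove the named columns."""
    return [{k: v for k, v in row.items() if k not in columns} for row in rows]

def keep_columns(rows, columns):
    """Rough analogue of TabularDataset.keep_columns: retain only the named columns."""
    return [{k: v for k, v in row.items() if k in columns} for row in rows]

# Toy rows shaped like the bank marketing data, with 'y' as the label column.
rows = [{"age": 30, "job": "admin", "y": "no"},
        {"age": 45, "job": "technician", "y": "yes"}]
X_train = drop_columns(rows, ["y"])
y_train = keep_columns(rows, ["y"])
```

Keeping features and label in one dataset, as the new `training_data` parameter does, avoids the two column-filtered copies entirely.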
@@ -238,9 +220,8 @@
|
|||||||
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
|
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
|
||||||
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
||||||
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
||||||
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
"|**training_data**|Input dataset, containing both features and label column.|\n",
|
||||||
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
|
"|**label_column_name**|The name of the label column.|\n",
|
||||||
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
|
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
|
||||||
]
|
]
|
||||||
@@ -263,10 +244,9 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"automl_config = AutoMLConfig(task = 'classification',\n",
|
"automl_config = AutoMLConfig(task = 'classification',\n",
|
||||||
" debug_log = 'automl_errors.log',\n",
|
" debug_log = 'automl_errors.log',\n",
|
||||||
" path = project_folder,\n",
|
|
||||||
" run_configuration=conda_run_config,\n",
|
" run_configuration=conda_run_config,\n",
|
||||||
" X = X_train,\n",
|
" training_data = dataset,\n",
|
||||||
" y = y_train,\n",
|
" label_column_name = 'y',\n",
|
||||||
" **automl_settings\n",
|
" **automl_settings\n",
|
||||||
" )"
|
" )"
|
||||||
]
|
]
|
||||||
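The `AutoMLConfig` call above passes `training_data` and `label_column_name` explicitly and expands the rest with `**automl_settings`. A toy stand-in (the `auto_ml_config` function here is hypothetical, not the real `AutoMLConfig`) showing how that keyword merge behaves:

```python
# Settings dict of the kind the notebooks build before the config call.
automl_settings = {
    "iteration_timeout_minutes": 5,
    "iterations": 10,
    "n_cross_validations": 2,
}

def auto_ml_config(task, **kwargs):
    # Hypothetical stand-in: just records the merged keyword arguments.
    return {"task": task, **kwargs}

config = auto_ml_config("classification",
                        training_data="<tabular dataset>",
                        label_column_name="y",
                        **automl_settings)
```

Explicit keywords and the expanded dict land in the same `kwargs` namespace, so a key appearing in both places would raise a `TypeError` for a duplicate argument.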
@@ -446,7 +426,7 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
|
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
|
||||||
" pip_packages=['azureml-train-automl'])\n",
|
" pip_packages=['azureml-defaults','azureml-train-automl'])\n",
|
||||||
"\n",
|
"\n",
|
||||||
"conda_env_file_name = 'myenv.yml'\n",
|
"conda_env_file_name = 'myenv.yml'\n",
|
||||||
"myenv.save_to_file('.', conda_env_file_name)"
|
"myenv.save_to_file('.', conda_env_file_name)"
|
||||||
@@ -483,45 +463,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Create a Container Image\n",
|
"### Deploy the model as a Web Service on Azure Container Instance"
|
||||||
"\n",
|
|
||||||
"Next use Azure Container Instances for deploying models as a web service for quickly deploying and validating your model\n",
|
|
||||||
"or when testing a model that is under development."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.image import Image, ContainerImage\n",
|
|
||||||
"\n",
|
|
||||||
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
|
|
||||||
" execution_script = script_file_name,\n",
|
|
||||||
" conda_file = conda_env_file_name,\n",
|
|
||||||
" tags = {'area': \"bmData\", 'type': \"automl_classification\"},\n",
|
|
||||||
" description = \"Image for automl classification sample\")\n",
|
|
||||||
"\n",
|
|
||||||
"image = Image.create(name = \"automlsampleimage\",\n",
|
|
||||||
" # this is the model object \n",
|
|
||||||
" models = [model],\n",
|
|
||||||
" image_config = image_config, \n",
|
|
||||||
" workspace = ws)\n",
|
|
||||||
"\n",
|
|
||||||
"image.wait_for_creation(show_output = True)\n",
|
|
||||||
"\n",
|
|
||||||
"if image.creation_state == 'Failed':\n",
|
|
||||||
" print(\"Image build log at: \" + image.image_build_log_uri)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Deploy the Image as a Web Service on Azure Container Instance\n",
|
|
||||||
"\n",
|
|
||||||
"Deploy an image that contains the model and other assets needed by the service."
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -530,28 +472,23 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
|
"from azureml.core.model import InferenceConfig\n",
|
||||||
"from azureml.core.webservice import AciWebservice\n",
|
"from azureml.core.webservice import AciWebservice\n",
|
||||||
|
"from azureml.core.webservice import Webservice\n",
|
||||||
|
"from azureml.core.model import Model\n",
|
||||||
|
"\n",
|
||||||
|
"inference_config = InferenceConfig(runtime = \"python\", \n",
|
||||||
|
" entry_script = script_file_name,\n",
|
||||||
|
" conda_file = conda_env_file_name)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
||||||
" memory_gb = 1, \n",
|
" memory_gb = 1, \n",
|
||||||
" tags = {'area': \"bmData\", 'type': \"automl_classification\"}, \n",
|
" tags = {'area': \"bmData\", 'type': \"automl_classification\"}, \n",
|
||||||
" description = 'sample service for Automl Classification')"
|
" description = 'sample service for Automl Classification')\n",
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.webservice import Webservice\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"aci_service_name = 'automl-sample-bankmarketing'\n",
|
"aci_service_name = 'automl-sample-bankmarketing'\n",
|
||||||
"print(aci_service_name)\n",
|
"print(aci_service_name)\n",
|
||||||
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
|
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
|
||||||
" image = image,\n",
|
|
||||||
" name = aci_service_name,\n",
|
|
||||||
" workspace = ws)\n",
|
|
||||||
"aci_service.wait_for_deployment(True)\n",
|
"aci_service.wait_for_deployment(True)\n",
|
||||||
"print(aci_service.state)"
|
"print(aci_service.state)"
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -2,6 +2,7 @@ name: auto-ml-classification-bank-marketing
|
|||||||
dependencies:
|
dependencies:
|
||||||
- pip:
|
- pip:
|
||||||
- azureml-sdk
|
- azureml-sdk
|
||||||
|
- interpret
|
||||||
- azureml-defaults
|
- azureml-defaults
|
||||||
- azureml-explain-model
|
- azureml-explain-model
|
||||||
- azureml-train-automl
|
- azureml-train-automl
|
||||||
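With the `- interpret` line added by this hunk, the assembled dependency file would read roughly as follows (reconstructed from the hunk above; indentation is approximate since the split diff strips it):

```yaml
name: auto-ml-classification-bank-marketing
dependencies:
  - pip:
    - azureml-sdk
    - interpret
    - azureml-defaults
    - azureml-explain-model
    - azureml-train-automl
```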
|
|||||||
@@ -92,8 +92,6 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"# choose a name for experiment\n",
|
"# choose a name for experiment\n",
|
||||||
"experiment_name = 'automl-classification-ccard'\n",
|
"experiment_name = 'automl-classification-ccard'\n",
|
||||||
"# project folder\n",
|
|
||||||
"project_folder = './sample_projects/automl-classification-creditcard'\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"experiment=Experiment(ws, experiment_name)\n",
|
"experiment=Experiment(ws, experiment_name)\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -103,7 +101,6 @@
|
|||||||
"output['Workspace'] = ws.name\n",
|
"output['Workspace'] = ws.name\n",
|
||||||
"output['Resource Group'] = ws.resource_group\n",
|
"output['Resource Group'] = ws.resource_group\n",
|
||||||
"output['Location'] = ws.location\n",
|
"output['Location'] = ws.location\n",
|
||||||
"output['Project Directory'] = project_folder\n",
|
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
@@ -164,20 +161,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"# Data\n",
|
"# Data\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Here load the data in the get_data script to be utilized in azure compute. To do this, first load all the necessary libraries and dependencies to set up paths for the data and to create the conda_run_config."
|
"Create a run configuration for the remote run."
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"if not os.path.isdir('data'):\n",
|
|
||||||
" os.mkdir('data')\n",
|
|
||||||
" \n",
|
|
||||||
"if not os.path.exists(project_folder):\n",
|
|
||||||
" os.makedirs(project_folder)"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -207,7 +191,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"### Load Data\n",
|
"### Load Data\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Here create the script to be run in azure compute for loading the data, load the credit card dataset into cards and store the Class column (y) in the y variable and store the remaining data in the x variable. Next split the data using random_split and return X_train and y_train for training the model."
|
"Load the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -218,10 +202,10 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n",
|
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n",
|
||||||
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
|
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
|
||||||
"X = dataset.drop_columns(columns=['Class'])\n",
|
"training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n",
|
||||||
"y = dataset.keep_columns(columns=['Class'], validate=True)\n",
|
"label_column_name = 'Class'\n",
|
||||||
"X_train, X_test = X.random_split(percentage=0.8, seed=223)\n",
|
"X_test = validation_data.drop_columns(columns=[label_column_name])\n",
|
||||||
"y_train, y_test = y.random_split(percentage=0.8, seed=223)"
|
"y_test = validation_data.keep_columns(columns=[label_column_name], validate=True)\n"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
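The rewritten cell splits the whole dataset 80/20 first and only separates the label column out for the held-back test set. A rough stdlib sketch of that flow — deterministic given the seed, and not the `TabularDataset` implementation:

```python
import random

def random_split(rows, percentage=0.8, seed=223):
    # Each row independently lands in the first partition with the given
    # probability, mirroring the shape of TabularDataset.random_split.
    rng = random.Random(seed)
    first, second = [], []
    for row in rows:
        (first if rng.random() < percentage else second).append(row)
    return first, second

# Toy rows shaped like the credit card data, with 'Class' as the label.
rows = [{"V1": i, "Amount": 10.0 * i, "Class": i % 2} for i in range(100)]
training_data, validation_data = random_split(rows)
label_column_name = "Class"
X_test = [{k: v for k, v in r.items() if k != label_column_name}
          for r in validation_data]
y_test = [r[label_column_name] for r in validation_data]
```

Because the seed is fixed, repeated runs produce the same partition, which is what makes the notebook's downstream test-set evaluation reproducible.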
@@ -239,9 +223,8 @@
|
|||||||
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
|
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
|
||||||
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
||||||
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
||||||
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
"|**training_data**|Input dataset, containing both features and label column.|\n",
|
||||||
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
|
"|**label_column_name**|The name of the label column.|\n",
|
||||||
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
|
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
|
||||||
]
|
]
|
||||||
@@ -270,11 +253,10 @@
|
|||||||
"}\n",
|
"}\n",
|
||||||
"\n",
|
"\n",
|
||||||
"automl_config = AutoMLConfig(task = 'classification',\n",
|
"automl_config = AutoMLConfig(task = 'classification',\n",
|
||||||
" debug_log = 'automl_errors_20190417.log',\n",
|
" debug_log = 'automl_errors.log',\n",
|
||||||
" path = project_folder,\n",
|
|
||||||
" run_configuration=conda_run_config,\n",
|
" run_configuration=conda_run_config,\n",
|
||||||
" X = X_train,\n",
|
" training_data = training_data,\n",
|
||||||
" y = y_train,\n",
|
" label_column_name = label_column_name,\n",
|
||||||
" **automl_settings\n",
|
" **automl_settings\n",
|
||||||
" )"
|
" )"
|
||||||
]
|
]
|
||||||
@@ -453,7 +435,7 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
|
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
|
||||||
" pip_packages=['azureml-train-automl'])\n",
|
" pip_packages=['azureml-defaults','azureml-train-automl'])\n",
|
||||||
"\n",
|
"\n",
|
||||||
"conda_env_file_name = 'myenv.yml'\n",
|
"conda_env_file_name = 'myenv.yml'\n",
|
||||||
"myenv.save_to_file('.', conda_env_file_name)"
|
"myenv.save_to_file('.', conda_env_file_name)"
|
||||||
@@ -490,45 +472,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Create a Container Image\n",
|
"### Deploy the model as a Web Service on Azure Container Instance"
|
||||||
"\n",
|
|
||||||
"Next use Azure Container Instances for deploying models as a web service for quickly deploying and validating your model\n",
|
|
||||||
"or when testing a model that is under development."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.image import Image, ContainerImage\n",
|
|
||||||
"\n",
|
|
||||||
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
|
|
||||||
" execution_script = script_file_name,\n",
|
|
||||||
" conda_file = conda_env_file_name,\n",
|
|
||||||
" tags = {'area': \"cards\", 'type': \"automl_classification\"},\n",
|
|
||||||
" description = \"Image for automl classification sample\")\n",
|
|
||||||
"\n",
|
|
||||||
"image = Image.create(name = \"automlsampleimage\",\n",
|
|
||||||
" # this is the model object \n",
|
|
||||||
" models = [model],\n",
|
|
||||||
" image_config = image_config, \n",
|
|
||||||
" workspace = ws)\n",
|
|
||||||
"\n",
|
|
||||||
"image.wait_for_creation(show_output = True)\n",
|
|
||||||
"\n",
|
|
||||||
"if image.creation_state == 'Failed':\n",
|
|
||||||
" print(\"Image build log at: \" + image.image_build_log_uri)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Deploy the Image as a Web Service on Azure Container Instance\n",
|
|
||||||
"\n",
|
|
||||||
"Deploy an image that contains the model and other assets needed by the service."
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -537,28 +481,23 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
|
"from azureml.core.model import InferenceConfig\n",
|
||||||
"from azureml.core.webservice import AciWebservice\n",
|
"from azureml.core.webservice import AciWebservice\n",
|
||||||
|
"from azureml.core.webservice import Webservice\n",
|
||||||
|
"from azureml.core.model import Model\n",
|
||||||
|
"\n",
|
||||||
|
"inference_config = InferenceConfig(runtime = \"python\", \n",
|
||||||
|
" entry_script = script_file_name,\n",
|
||||||
|
" conda_file = conda_env_file_name)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
||||||
" memory_gb = 1, \n",
|
" memory_gb = 1, \n",
|
||||||
" tags = {'area': \"cards\", 'type': \"automl_classification\"}, \n",
|
" tags = {'area': \"cards\", 'type': \"automl_classification\"}, \n",
|
||||||
" description = 'sample service for Automl Classification')"
|
" description = 'sample service for Automl Classification')\n",
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.webservice import Webservice\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"aci_service_name = 'automl-sample-creditcard'\n",
|
"aci_service_name = 'automl-sample-creditcard'\n",
|
||||||
"print(aci_service_name)\n",
|
"print(aci_service_name)\n",
|
||||||
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
|
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
|
||||||
" image = image,\n",
|
|
||||||
" name = aci_service_name,\n",
|
|
||||||
" workspace = ws)\n",
|
|
||||||
"aci_service.wait_for_deployment(True)\n",
|
"aci_service.wait_for_deployment(True)\n",
|
||||||
"print(aci_service.state)"
|
"print(aci_service.state)"
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -2,6 +2,7 @@ name: auto-ml-classification-credit-card-fraud
|
|||||||
dependencies:
|
dependencies:
|
||||||
- pip:
|
- pip:
|
||||||
- azureml-sdk
|
- azureml-sdk
|
||||||
|
- interpret
|
||||||
- azureml-defaults
|
- azureml-defaults
|
||||||
- azureml-explain-model
|
- azureml-explain-model
|
||||||
- azureml-train-automl
|
- azureml-train-automl
|
||||||
|
|||||||
@@ -92,8 +92,6 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"# choose a name for experiment\n",
|
"# choose a name for experiment\n",
|
||||||
"experiment_name = 'automl-classification-deployment'\n",
|
"experiment_name = 'automl-classification-deployment'\n",
|
||||||
"# project folder\n",
|
|
||||||
"project_folder = './sample_projects/automl-classification-deployment'\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"experiment=Experiment(ws, experiment_name)\n",
|
"experiment=Experiment(ws, experiment_name)\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -103,7 +101,6 @@
|
|||||||
"output['Workspace'] = ws.name\n",
|
"output['Workspace'] = ws.name\n",
|
||||||
"output['Resource Group'] = ws.resource_group\n",
|
"output['Resource Group'] = ws.resource_group\n",
|
||||||
"output['Location'] = ws.location\n",
|
"output['Location'] = ws.location\n",
|
||||||
"output['Project Directory'] = project_folder\n",
|
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
@@ -126,8 +123,7 @@
|
|||||||
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
||||||
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
||||||
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
||||||
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
|
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|"
|
||||||
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -148,8 +144,7 @@
|
|||||||
" iterations = 10,\n",
|
" iterations = 10,\n",
|
||||||
" verbosity = logging.INFO,\n",
|
" verbosity = logging.INFO,\n",
|
||||||
" X = X_train, \n",
|
" X = X_train, \n",
|
||||||
" y = y_train,\n",
|
" y = y_train)"
|
||||||
" path = project_folder)"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -310,7 +305,7 @@
|
|||||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||||
"\n",
|
"\n",
|
||||||
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
|
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost<=0.80'],\n",
|
||||||
" pip_packages=['azureml-train-automl'])\n",
|
" pip_packages=['azureml-defaults','azureml-train-automl'])\n",
|
||||||
"\n",
|
"\n",
|
||||||
"conda_env_file_name = 'myenv.yml'\n",
|
"conda_env_file_name = 'myenv.yml'\n",
|
||||||
"myenv.save_to_file('.', conda_env_file_name)"
|
"myenv.save_to_file('.', conda_env_file_name)"
|
||||||
@@ -347,40 +342,9 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Create a Container Image"
|
"### Deploy the model as a Web Service on Azure Container Instance\n",
|
||||||
]
|
"\n",
|
||||||
},
|
"Create the configuration needed for deploying the model as a web service."
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.image import Image, ContainerImage\n",
|
|
||||||
"\n",
|
|
||||||
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
|
|
||||||
" execution_script = script_file_name,\n",
|
|
||||||
" conda_file = conda_env_file_name,\n",
|
|
||||||
" tags = {'area': \"digits\", 'type': \"automl_classification\"},\n",
|
|
||||||
" description = \"Image for automl classification sample\")\n",
|
|
||||||
"\n",
|
|
||||||
"image = Image.create(name = \"automlsampleimage\",\n",
|
|
||||||
" # this is the model object \n",
|
|
||||||
" models = [model],\n",
|
|
||||||
" image_config = image_config, \n",
|
|
||||||
" workspace = ws)\n",
|
|
||||||
"\n",
|
|
||||||
"image.wait_for_creation(show_output = True)\n",
|
|
||||||
"\n",
|
|
||||||
"if image.creation_state == 'Failed':\n",
|
|
||||||
" print(\"Image build log at: \" + image.image_build_log_uri)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Deploy the Image as a Web Service on Azure Container Instance"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -389,8 +353,13 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
|
"from azureml.core.model import InferenceConfig\n",
|
||||||
"from azureml.core.webservice import AciWebservice\n",
|
"from azureml.core.webservice import AciWebservice\n",
|
||||||
"\n",
|
"\n",
|
||||||
|
"inference_config = InferenceConfig(runtime = \"python\", \n",
|
||||||
|
" entry_script = script_file_name,\n",
|
||||||
|
" conda_file = conda_env_file_name)\n",
|
||||||
|
"\n",
|
||||||
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
||||||
" memory_gb = 1, \n",
|
" memory_gb = 1, \n",
|
||||||
" tags = {'area': \"digits\", 'type': \"automl_classification\"}, \n",
|
" tags = {'area': \"digits\", 'type': \"automl_classification\"}, \n",
|
||||||
@@ -404,17 +373,33 @@
|
|||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"from azureml.core.webservice import Webservice\n",
|
"from azureml.core.webservice import Webservice\n",
|
||||||
|
"from azureml.core.model import Model\n",
|
||||||
"\n",
|
"\n",
|
||||||
"aci_service_name = 'automl-sample-01'\n",
|
"aci_service_name = 'automl-sample-01'\n",
|
||||||
"print(aci_service_name)\n",
|
"print(aci_service_name)\n",
|
||||||
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
|
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
|
||||||
" image = image,\n",
|
|
||||||
" name = aci_service_name,\n",
|
|
||||||
" workspace = ws)\n",
|
|
||||||
"aci_service.wait_for_deployment(True)\n",
|
"aci_service.wait_for_deployment(True)\n",
|
||||||
"print(aci_service.state)"
|
"print(aci_service.state)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### Get the logs from service deployment"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"if aci_service.state != 'Healthy':\n",
|
||||||
|
" # run this command for debugging.\n",
|
||||||
|
" print(aci_service.get_logs())"
|
||||||
|
]
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@@ -431,22 +416,6 @@
|
|||||||
"#aci_service.delete()"
|
"#aci_service.delete()"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Get Logs from a Deployed Web Service"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"#aci_service.get_logs()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
|
|||||||
@@ -89,9 +89,8 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"ws = Workspace.from_config()\n",
|
"ws = Workspace.from_config()\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Choose a name for the experiment and specify the project folder.\n",
|
"# Choose a name for the experiment.\n",
|
||||||
"experiment_name = 'automl-classification-onnx'\n",
|
"experiment_name = 'automl-classification-onnx'\n",
|
||||||
"project_folder = './sample_projects/automl-classification-onnx'\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"experiment = Experiment(ws, experiment_name)\n",
|
"experiment = Experiment(ws, experiment_name)\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -101,7 +100,6 @@
|
|||||||
"output['Workspace Name'] = ws.name\n",
|
"output['Workspace Name'] = ws.name\n",
|
||||||
"output['Resource Group'] = ws.resource_group\n",
|
"output['Resource Group'] = ws.resource_group\n",
|
||||||
"output['Location'] = ws.location\n",
|
"output['Location'] = ws.location\n",
|
||||||
"output['Project Directory'] = project_folder\n",
|
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
@@ -127,9 +125,7 @@
|
|||||||
"X_train, X_test, y_train, y_test = train_test_split(iris.data, \n",
|
"X_train, X_test, y_train, y_test = train_test_split(iris.data, \n",
|
||||||
" iris.target, \n",
|
" iris.target, \n",
|
||||||
" test_size=0.2, \n",
|
" test_size=0.2, \n",
|
||||||
" random_state=0)\n",
|
" random_state=0)"
|
||||||
"\n",
|
|
||||||
"\n"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -170,8 +166,7 @@
|
|||||||
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
||||||
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
||||||
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
|
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
|
||||||
"|**enable_onnx_compatible_models**|Enable the ONNX compatible models in the experiment.|\n",
|
"|**enable_onnx_compatible_models**|Enable the ONNX compatible models in the experiment.|"
|
||||||
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -196,8 +191,7 @@
|
|||||||
" X = X_train, \n",
|
" X = X_train, \n",
|
||||||
" y = y_train,\n",
|
" y = y_train,\n",
|
||||||
" preprocess=True,\n",
|
" preprocess=True,\n",
|
||||||
" enable_onnx_compatible_models=True,\n",
|
" enable_onnx_compatible_models=True)"
|
||||||
" path = project_folder)"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
|
|||||||
@@ -100,9 +100,8 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"ws = Workspace.from_config()\n",
|
"ws = Workspace.from_config()\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Choose a name for the experiment and specify the project folder.\n",
|
"# Choose a name for the experiment.\n",
|
||||||
"experiment_name = 'automl-local-whitelist'\n",
|
"experiment_name = 'automl-local-whitelist'\n",
|
||||||
"project_folder = './sample_projects/automl-local-whitelist'\n",
|
|
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -112,7 +111,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -158,7 +156,6 @@
 "|**n_cross_validations**|Number of cross validation splits.|\n",
 "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
 "|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
-"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
 "|**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|"
 ]
 },
@@ -177,8 +174,7 @@
 " X = X_train, \n",
 " y = y_train,\n",
 " enable_tf=True,\n",
-" whitelist_models=whitelist_models,\n",
-" path = project_folder)"
+" whitelist_models=whitelist_models)"
 ]
 },
 {
@@ -113,9 +113,8 @@
 "source": [
 "ws = Workspace.from_config()\n",
 "\n",
-"# Choose a name for the experiment and specify the project folder.\n",
+"# Choose a name for the experiment.\n",
 "experiment_name = 'automl-classification'\n",
-"project_folder = './sample_projects/automl-classification'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -125,7 +124,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -87,8 +87,6 @@
 "\n",
 "# choose a name for experiment\n",
 "experiment_name = 'automl-dataset-remote-bai'\n",
-"# project folder\n",
-"project_folder = './sample_projects/automl-dataprep-remote-bai'\n",
 " \n",
 "experiment = Experiment(ws, experiment_name)\n",
 " \n",
@@ -98,7 +96,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -141,8 +138,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"X = dataset.drop_columns(columns=['Primary Type', 'FBI Code'])\n",
-"y = dataset.keep_columns(columns=['Primary Type'], validate=True)"
+"training_data = dataset.drop_columns(columns=['FBI Code'])\n",
+"label_column_name = 'Primary Type'"
 ]
 },
 {
@@ -253,10 +250,9 @@
 "source": [
 "automl_config = AutoMLConfig(task = 'classification',\n",
 " debug_log = 'automl_errors.log',\n",
-" path = project_folder,\n",
 " run_configuration=conda_run_config,\n",
-" X = X,\n",
-" y = y,\n",
+" training_data = training_data,\n",
+" label_column_name = label_column_name,\n",
 " **automl_settings)"
 ]
 },
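The hunks above migrate the dataset notebooks from the `X`/`y` pair to the `training_data`/`label_column_name` pair. In plain pandas terms (a toy sketch with an invented frame, not the Azure ML `TabularDataset` API), the new-style preparation looks like this:

```python
import pandas as pd

# Hypothetical stand-in for the crime dataset used in the notebook.
df = pd.DataFrame({
    'Primary Type': ['THEFT', 'BATTERY'],
    'FBI Code': ['06', '08B'],
    'Arrest': [False, True],
})

# New style: keep the label column inside the training data and name it,
# instead of splitting features (X) and labels (y) up front.
training_data = df.drop(columns=['FBI Code'])
label_column_name = 'Primary Type'
```

The label stays in `training_data`; AutoML is told which column it is rather than being handed a separate target array.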
@@ -2,6 +2,7 @@ name: auto-ml-dataset-remote-execution
 dependencies:
 - pip:
   - azureml-sdk
+  - interpret
   - azureml-defaults
   - azureml-explain-model
   - azureml-train-automl
@@ -87,8 +87,6 @@
 " \n",
 "# choose a name for experiment\n",
 "experiment_name = 'automl-dataset-local'\n",
-"# project folder\n",
-"project_folder = './sample_projects/automl-dataset-local'\n",
 " \n",
 "experiment = Experiment(ws, experiment_name)\n",
 " \n",
@@ -98,7 +96,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -141,8 +138,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"X = dataset.drop_columns(columns=['Primary Type', 'FBI Code'])\n",
-"y = dataset.keep_columns(columns=['Primary Type'], validate=True)"
+"training_data = dataset.drop_columns(columns=['FBI Code'])\n",
+"label_column_name = 'Primary Type'"
 ]
 },
 {
@@ -186,8 +183,8 @@
 "source": [
 "automl_config = AutoMLConfig(task = 'classification',\n",
 " debug_log = 'automl_errors.log',\n",
-" X = X,\n",
-" y = y,\n",
+" training_data = training_data,\n",
+" label_column_name = label_column_name,\n",
 " **automl_settings)"
 ]
 },
@@ -6,3 +6,4 @@ dependencies:
   - azureml-widgets
   - matplotlib
   - pandas_ml
+  - azureml-dataprep[pandas]
@@ -97,8 +97,6 @@
 "\n",
 "# choose a name for the run history container in the workspace\n",
 "experiment_name = 'automl-bikeshareforecasting'\n",
-"# project folder\n",
-"project_folder = './sample_projects/automl-local-bikeshareforecasting'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -108,7 +106,6 @@
 "output['Workspace'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Run History Name'] = experiment_name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -221,11 +218,12 @@
 "|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
 "|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|\n",
 "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
-"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
-"|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
+"|**training_data**|Input dataset, containing both features and label column.|\n",
+"|**label_column_name**|The name of the label column.|\n",
 "|**n_cross_validations**|Number of cross validation splits.|\n",
 "|**country_or_region**|The country/region used to generate holiday features. These should be ISO 3166 two-letter country/region codes (i.e. 'US', 'GB').|\n",
-"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
+"\n",
+"This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results."
 ]
 },
 {
@@ -246,12 +244,12 @@
 "\n",
 "automl_config = AutoMLConfig(task='forecasting', \n",
 " primary_metric='normalized_root_mean_squared_error',\n",
+" blacklist_models = ['ExtremeRandomTrees'],\n",
 " iterations=10,\n",
 " iteration_timeout_minutes=5,\n",
-" X=X_train,\n",
-" y=y_train,\n",
+" training_data=train,\n",
+" label_column_name=target_column_name,\n",
 " n_cross_validations=3, \n",
-" path=project_folder,\n",
 " verbosity=logging.INFO,\n",
 " **automl_settings)"
 ]
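The bikeshare hunks also introduce `blacklist_models`. As a rough illustration of its effect (a toy sketch, not the SDK's internal logic), AutoML simply skips candidate algorithms whose names appear on the list:

```python
# Toy candidate pool; real AutoML iterates over many more model families.
candidates = ['ElasticNet', 'ExtremeRandomTrees', 'GradientBoosting',
              'LightGBM', 'XGBoostRegressor']
blacklist_models = ['ExtremeRandomTrees', 'AutoArima']

# Candidates on the blacklist are excluded from the iteration loop.
allowed = [m for m in candidates if m not in blacklist_models]
```

Blacklisting slow models is why the notebooks can keep `iteration_timeout_minutes` low; removing names from the list may require raising that timeout, as the added markdown notes.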
@@ -93,8 +93,6 @@
 "\n",
 "# choose a name for the run history container in the workspace\n",
 "experiment_name = 'automl-energydemandforecasting'\n",
-"# project folder\n",
-"project_folder = './sample_projects/automl-local-energydemandforecasting'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -104,7 +102,6 @@
 "output['Workspace'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Run History Name'] = experiment_name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -213,8 +210,7 @@
 "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
 "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
 "|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
-"|**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|\n",
-"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
+"|**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|"
 ]
 },
 {
@@ -231,13 +227,12 @@
 "automl_config = AutoMLConfig(task='forecasting',\n",
 " debug_log='automl_nyc_energy_errors.log',\n",
 " primary_metric='normalized_root_mean_squared_error',\n",
-" blacklist_models = ['ExtremeRandomTrees'],\n",
+" blacklist_models = ['ExtremeRandomTrees', 'AutoArima'],\n",
 " iterations=10,\n",
 " iteration_timeout_minutes=5,\n",
 " X=X_train,\n",
 " y=y_train,\n",
 " n_cross_validations=3,\n",
-" path=project_folder,\n",
 " verbosity = logging.INFO,\n",
 " **time_series_settings)"
 ]
@@ -463,7 +458,9 @@
 "source": [
 "We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.\n",
 "\n",
-"Now that we configured target lags, that is the previous values of the target variables, and the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features."
+"Now that we configured target lags, that is the previous values of the target variables, and the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.\n",
+"\n",
+"This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results."
 ]
 },
 {
@@ -482,13 +479,12 @@
 "automl_config_lags = AutoMLConfig(task='forecasting',\n",
 " debug_log='automl_nyc_energy_errors.log',\n",
 " primary_metric='normalized_root_mean_squared_error',\n",
-" blacklist_models=['ElasticNet','ExtremeRandomTrees','GradientBoosting'],\n",
+" blacklist_models=['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor'],\n",
 " iterations=10,\n",
 " iteration_timeout_minutes=10,\n",
 " X=X_train,\n",
 " y=y_train,\n",
 " n_cross_validations=3,\n",
-" path=project_folder,\n",
 " verbosity=logging.INFO,\n",
 " **time_series_settings_with_lags)"
 ]
@@ -556,7 +552,21 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### What features matter for the forecast?"
+"### What features matter for the forecast?\n",
+"The following steps will allow you to compute and visualize engineered feature importance based on your test data for forecasting. "
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Setup the model explanations for AutoML models\n",
+"The *fitted_model* can generate the following which will be used for getting the engineered and raw feature explanations using *automl_setup_model_explanations*:-\n",
+"1. Featurized data from train samples/test samples \n",
+"2. Gather engineered and raw feature name lists\n",
+"3. Find the classes in your labeled column in classification scenarios\n",
+"\n",
+"The *automl_explainer_setup_obj* contains all the structures from above list. "
 ]
 },
 {
@@ -565,14 +575,74 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from azureml.train.automl.automlexplainer import explain_model\n",
-"\n",
-"# feature names are everything in the transformed data except the target\n",
-"features = X_trans_lags.columns[:-1]\n",
-"expl = explain_model(fitted_model_lags, X_train.copy(), X_test.copy(), features=features, best_run=best_run_lags, y_train=y_train)\n",
-"# unpack the tuple\n",
-"shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl\n",
-"best_run_lags"
+"from azureml.train.automl.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations\n",
+"automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train.copy(), \n",
+" X_test=X_test.copy(), y=y_train, \n",
+" task='forecasting')"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Initialize the Mimic Explainer for feature importance\n",
+"For explaining the AutoML models, use the *MimicWrapper* from *azureml.explain.model* package. The *MimicWrapper* can be initialized with fields in *automl_explainer_setup_obj*, your workspace and a LightGBM model which acts as a surrogate model to explain the AutoML model (*fitted_model* here). The *MimicWrapper* also takes the *best_run* object where the raw and engineered explanations will be uploaded."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel\n",
+"from azureml.explain.model.mimic_wrapper import MimicWrapper\n",
+"explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, \n",
+" init_dataset=automl_explainer_setup_obj.X_transform, run=best_run,\n",
+" features=automl_explainer_setup_obj.engineered_feature_names, \n",
+" feature_maps=[automl_explainer_setup_obj.feature_map])"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Use Mimic Explainer for computing and visualizing engineered feature importance\n",
+"The *explain()* method in *MimicWrapper* can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use *ExplanationDashboard* to view the dash board visualization of the feature importance values of the generated engineered features by AutoML featurizers."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)\n",
+"print(engineered_explanations.get_feature_importance_dict())\n",
+"from azureml.contrib.interpret.visualize import ExplanationDashboard\n",
+"ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, automl_explainer_setup_obj.X_test_transform)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Use Mimic Explainer for computing and visualizing raw feature importance\n",
+"The *explain()* method in *MimicWrapper* can be again called with the transformed test samples and setting *get_raw* to *True* to get the feature importance for the raw features. You can also use *ExplanationDashboard* to view the dash board visualization of the feature importance values of the raw features."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"raw_explanations = explainer.explain(['local', 'global'], get_raw=True, \n",
+" raw_feature_names=automl_explainer_setup_obj.raw_feature_names,\n",
+" eval_dataset=automl_explainer_setup_obj.X_test_transform)\n",
+"print(raw_explanations.get_feature_importance_dict())\n",
+"from azureml.contrib.interpret.visualize import ExplanationDashboard\n",
+"ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, automl_explainer_setup_obj.X_test_raw)"
 ]
 },
 {
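The `feature_maps` argument passed to `MimicWrapper` in the hunks above is what lets raw-feature importances be recovered from engineered-feature importances. A toy numpy sketch of that aggregation (the map and importance values are invented for illustration, not taken from the SDK):

```python
import numpy as np

# Invented 2-raw -> 4-engineered map: entry [i, j] is 1 when engineered
# feature j was derived from raw feature i by the featurizer.
feature_map = np.array([
    [1, 1, 0, 0],   # raw feature 0 produced engineered features 0 and 1
    [0, 0, 1, 1],   # raw feature 1 produced engineered features 2 and 3
])
engineered_importance = np.array([0.1, 0.2, 0.3, 0.4])

# Raw importance = sum of the importances of the engineered features
# each raw column produced.
raw_importance = feature_map @ engineered_importance
```

This is why the raw-feature call only needs `get_raw=True` plus the raw feature names: the map from engineered back to raw columns was already captured at setup time.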
@@ -2,9 +2,11 @@ name: auto-ml-forecasting-energy-demand
 dependencies:
 - pip:
   - azureml-sdk
+  - interpret
   - azureml-train-automl
   - azureml-widgets
   - matplotlib
   - pandas_ml
   - statsmodels
   - azureml-explain-model
+  - azureml-contrib-interpret
@@ -0,0 +1,615 @@
|
|||||||
|
{
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"# Automated Machine Learning\n",
|
||||||
|
"\n",
|
||||||
|
"## Forecasting away from training data\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook demonstrates the full interface to the `forecast()` function. \n",
|
||||||
|
"\n",
|
||||||
|
"The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. \n",
|
||||||
|
"\n",
|
||||||
|
"However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high frequency forecasting** when forecasts need to be made more frequently than the model can be retrained. Examples are in Internet of Things and predictive cloud resource scaling.\n",
|
||||||
|
"\n",
|
||||||
|
"Here we show how to use the `forecast()` function when a time gap exists between training data and prediction period.\n",
|
||||||
|
"\n",
|
||||||
|
"Terminology:\n",
|
||||||
|
"* forecast origin: the last period when the target value is known\n",
|
||||||
|
"* forecast periods(s): the period(s) for which the value of the target is desired.\n",
|
||||||
|
"* forecast horizon: the number of forecast periods\n",
|
||||||
|
"* lookback: how many past periods (before forecast origin) the model function depends on. The larger of number of lags and length of rolling window.\n",
|
||||||
|
"* prediction context: `lookback` periods immediately preceding the forecast origin\n",
|
||||||
|
"\n",
|
||||||
|
""
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Setup"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Please make sure you have followed the `configuration.ipynb` notebook so that your ML workspace information is saved in the config file."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"import pandas as pd\n",
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import logging\n",
|
||||||
|
"import warnings\n",
|
||||||
|
"\n",
|
||||||
|
"from pandas.tseries.frequencies import to_offset\n",
|
||||||
|
"\n",
|
||||||
|
"# Squash warning messages for cleaner output in the notebook\n",
|
||||||
|
"warnings.showwarning = lambda *args, **kwargs: None\n",
|
||||||
|
"\n",
|
||||||
|
"np.set_printoptions(precision=4, suppress=True, linewidth=120)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"import azureml.core\n",
|
||||||
|
"from azureml.core.workspace import Workspace\n",
|
||||||
|
"from azureml.core.experiment import Experiment\n",
|
||||||
|
"from azureml.train.automl import AutoMLConfig\n",
|
||||||
|
"\n",
|
||||||
|
"ws = Workspace.from_config()\n",
|
||||||
|
"\n",
|
||||||
|
"# choose a name for the run history container in the workspace\n",
|
||||||
|
"experiment_name = 'automl-forecast-function-demo'\n",
|
||||||
|
"\n",
|
||||||
|
"experiment = Experiment(ws, experiment_name)\n",
|
||||||
|
"\n",
|
||||||
|
"output = {}\n",
|
||||||
|
"output['SDK version'] = azureml.core.VERSION\n",
|
||||||
|
"output['Subscription ID'] = ws.subscription_id\n",
|
||||||
|
"output['Workspace'] = ws.name\n",
|
||||||
|
"output['Resource Group'] = ws.resource_group\n",
|
||||||
|
"output['Location'] = ws.location\n",
|
||||||
|
"output['Run History Name'] = experiment_name\n",
|
||||||
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
|
"outputDf.T"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Data\n",
|
||||||
|
"For the demonstration purposes we will generate the data artificially and use them for the forecasting."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"TIME_COLUMN_NAME = 'date'\n",
|
||||||
|
"GRAIN_COLUMN_NAME = 'grain'\n",
|
||||||
|
"TARGET_COLUMN_NAME = 'y'\n",
|
||||||
|
"\n",
|
||||||
|
"def get_timeseries(train_len: int,\n",
|
||||||
|
" test_len: int,\n",
|
||||||
|
" time_column_name: str,\n",
|
||||||
|
" target_column_name: str,\n",
|
||||||
|
" grain_column_name: str,\n",
|
||||||
|
" grains: int = 1,\n",
|
||||||
|
" freq: str = 'H'):\n",
|
||||||
|
" \"\"\"\n",
|
||||||
|
" Return the time series of designed length.\n",
|
||||||
|
"\n",
|
||||||
|
" :param train_len: The length of training data (one series).\n",
|
||||||
|
" :type train_len: int\n",
|
||||||
|
" :param test_len: The length of testing data (one series).\n",
|
||||||
|
" :type test_len: int\n",
|
||||||
|
" :param time_column_name: The desired name of a time column.\n",
|
||||||
|
" :type time_column_name: str\n",
|
||||||
|
" :param\n",
|
||||||
|
" :param grains: The number of grains.\n",
|
||||||
|
" :type grains: int\n",
|
||||||
|
" :param freq: The frequency string representing pandas offset.\n",
|
||||||
|
" see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n",
|
||||||
|
" :type freq: str\n",
|
||||||
|
" :returns: the tuple of train and test data sets.\n",
|
||||||
|
" :rtype: tuple\n",
|
||||||
|
"\n",
|
||||||
|
" \"\"\"\n",
|
||||||
|
" data_train = [] # type: List[pd.DataFrame]\n",
|
||||||
|
" data_test = [] # type: List[pd.DataFrame]\n",
|
||||||
|
" data_length = train_len + test_len\n",
|
||||||
|
" for i in range(grains):\n",
|
||||||
|
" X = pd.DataFrame({\n",
|
||||||
|
" time_column_name: pd.date_range(start='2000-01-01',\n",
|
||||||
|
" periods=data_length,\n",
|
||||||
|
" freq=freq),\n",
|
||||||
|
" target_column_name: np.arange(data_length).astype(float) + np.random.rand(data_length) + i*5,\n",
|
||||||
|
" 'ext_predictor': np.asarray(range(42, 42 + data_length)),\n",
|
||||||
|
" grain_column_name: np.repeat('g{}'.format(i), data_length)\n",
|
||||||
|
" })\n",
|
||||||
|
" data_train.append(X[:train_len])\n",
|
||||||
|
" data_test.append(X[train_len:])\n",
|
||||||
|
" X_train = pd.concat(data_train)\n",
|
||||||
|
" y_train = X_train.pop(target_column_name).values\n",
|
||||||
|
" X_test = pd.concat(data_test)\n",
|
||||||
|
" y_test = X_test.pop(target_column_name).values\n",
|
||||||
|
" return X_train, y_train, X_test, y_test\n",
|
||||||
|
"\n",
|
||||||
|
"n_test_periods = 6\n",
|
||||||
|
"n_train_periods = 30\n",
|
||||||
|
"X_train, y_train, X_test, y_test = get_timeseries(train_len=n_train_periods,\n",
|
||||||
|
" test_len=n_test_periods,\n",
|
||||||
|
" time_column_name=TIME_COLUMN_NAME,\n",
|
||||||
|
" target_column_name=TARGET_COLUMN_NAME,\n",
|
||||||
|
" grain_column_name=GRAIN_COLUMN_NAME,\n",
|
||||||
|
" grains=2)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Let's see what the training data looks like."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"X_train.tail()"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# plot the example time series\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"whole_data = X_train.copy()\n",
|
||||||
|
"whole_data['y'] = y_train\n",
|
||||||
|
"for g in whole_data.groupby('grain'): \n",
|
||||||
|
" plt.plot(g[1]['date'].values, g[1]['y'].values, label=g[0])\n",
|
||||||
|
"plt.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Create the configuration and train a forecaster\n",
|
||||||
|
"First generate the configuration, in which we:\n",
|
||||||
|
"* Set metadata columns: target, time column and grain column names.\n",
|
||||||
|
"* Ask for 10 iterations through models, the last of which will be an ensemble of the previous ones.\n",
|
||||||
|
"* Validate our data using cross validation with rolling window method.\n",
|
||||||
|
"* Set normalized root mean squared error as a metric to select the best model.\n",
|
||||||
|
"\n",
|
||||||
|
"* Finally, we set the task to be forecasting.\n",
|
||||||
|
"* By default, we apply the lag lead operator and rolling window to the target value, i.e. we use previous values of the target as predictors for future ones."
|
||||||
|
]
|
||||||
|
},
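The rolling-window validation mentioned above can be pictured with a small sketch (a conceptual illustration with made-up sizes, not AutoML's internal splitter): each fold trains on an expanding prefix of the series and validates on the periods that immediately follow it.

```python
import numpy as np

# Hypothetical series of 12 periods, 3 folds, validation horizon of 2
n_obs, n_folds, horizon = 12, 3, 2
t = np.arange(n_obs)

folds = []
for k in range(n_folds):
    # Each successive fold trains on a longer prefix of the series
    train_end = n_obs - (n_folds - k) * horizon
    folds.append((t[:train_end], t[train_end:train_end + horizon]))

for train, valid in folds:
    print(len(train), list(valid))  # 6 [6, 7] / 8 [8, 9] / 10 [10, 11]
```

Note how every validation window lies strictly after its training window, so no future information leaks into training.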
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"lags = [1,2,3]\n",
|
||||||
|
"rolling_window_length = 0 # don't do rolling windows\n",
|
||||||
|
"max_horizon = n_test_periods\n",
|
||||||
|
"time_series_settings = { \n",
|
||||||
|
" 'time_column_name': TIME_COLUMN_NAME,\n",
|
||||||
|
" 'grain_column_names': [ GRAIN_COLUMN_NAME ],\n",
|
||||||
|
" 'max_horizon': max_horizon,\n",
|
||||||
|
" 'target_lags': lags\n",
|
||||||
|
"}"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Run the model selection and training process."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"from azureml.core.workspace import Workspace\n",
|
||||||
|
"from azureml.core.experiment import Experiment\n",
|
||||||
|
"from azureml.train.automl import AutoMLConfig\n",
"import logging\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"automl_config = AutoMLConfig(task='forecasting',\n",
|
||||||
|
" debug_log='automl_forecasting_function.log',\n",
|
||||||
|
" primary_metric='normalized_root_mean_squared_error', \n",
|
||||||
|
" iterations=10, \n",
|
||||||
|
" X=X_train,\n",
|
||||||
|
" y=y_train,\n",
|
||||||
|
" n_cross_validations=3,\n",
|
||||||
|
" verbosity = logging.INFO,\n",
|
||||||
|
" **time_series_settings)\n",
|
||||||
|
"\n",
|
||||||
|
"local_run = experiment.submit(automl_config, show_output=True)\n",
|
||||||
|
"\n",
|
||||||
|
"# Retrieve the best model to use it further.\n",
|
||||||
|
"_, fitted_model = local_run.get_output()"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Forecasting from the trained model"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"### X_train is directly followed by X_test\n",
|
||||||
|
"\n",
|
||||||
|
"Let's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts made on a daily or slower cadence typically fall into this category. Retraining the model every time benefits accuracy because the most recent data is often the most informative.\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"The `X_test` and `y_query` below, taken together, form the **forecast request**. The two are interpreted as aligned - `y_query` could actually be a column in `X_test`. `NaN`s in `y_query` are the question marks; these will be filled with the forecasts.\n",
|
||||||
|
"\n",
|
||||||
|
"When the forecast period immediately follows the training period, the models retain the last few points of data. You can simply fill `y_query` with question marks (`NaN`s) - the model already has the data for the lookback.\n"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Typical path: X_test is known, forecast all upcoming periods"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# The data set contains hourly data; the training set ends on 2000-01-02 at 05:00\n",
|
||||||
|
"\n",
|
||||||
|
"# These are the predictions we are asking the model to make (does not contain the target column y),\n",
|
||||||
|
"# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data\n",
|
||||||
|
"X_test"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"y_query = np.repeat(np.nan, X_test.shape[0])\n",
|
||||||
|
"y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test, y_query)\n",
|
||||||
|
"\n",
|
||||||
|
"# xy_nogap contains the predictions in the _automl_target_col column.\n",
|
||||||
|
"# Those same numbers are output in y_pred_no_gap\n",
|
||||||
|
"xy_nogap"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Distribution forecasts\n",
|
||||||
|
"\n",
|
||||||
|
"Often the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. \n",
|
||||||
|
"This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such cases, the control point is usually something like \"we want the item to be in stock and not run out 99% of the time\". This is called a \"service level\". Here is how you get quantile forecasts."
|
||||||
|
]
|
||||||
|
},
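To make the service-level idea concrete (a back-of-the-envelope sketch with made-up numbers, not the AutoML API): under a normal approximation of the forecast errors, the stock level that achieves a 99% service level is the 0.99 quantile of the forecast distribution.

```python
# Hypothetical point forecast and forecast-error spread for one item
point_forecast = 100.0
error_std = 10.0

# Standard-normal 0.99 quantile; stocking this much covers demand
# in roughly 99% of periods under the normality assumption
z_99 = 2.3263
stock_level = point_forecast + z_99 * error_std
print(round(stock_level, 1))  # 123.3
```

The `forecast_quantiles` call below returns such quantiles directly from the fitted model, without the normality assumption made here.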
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# specify which quantiles you would like \n",
|
||||||
|
"fitted_model.quantiles = [0.01, 0.5, 0.95]\n",
|
||||||
|
"# use forecast_quantiles function, not the forecast() one\n",
|
||||||
|
"y_pred_quantiles = fitted_model.forecast_quantiles(X_test, y_query)\n",
|
||||||
|
"\n",
|
||||||
|
"# it all nicely aligns column-wise\n",
|
||||||
|
"pd.concat([X_test.reset_index(), pd.DataFrame({'query' : y_query}), y_pred_quantiles], axis=1)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"#### Destination-date forecast: \"just do something\"\n",
|
||||||
|
"\n",
|
||||||
|
"In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to a \"destination date\". The destination date still needs to fit within the maximum horizon from training."
|
||||||
|
]
|
||||||
|
},
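The "last known values carried forward" assumption amounts to a forward fill of the predictors, which can be sketched in plain pandas (made-up numbers; not the internal AutoML imputation):

```python
import numpy as np
import pandas as pd

# Hypothetical predictor: observed history, then unknown forecast periods
predictor = pd.Series([41.0, 42.0, np.nan, np.nan, np.nan])

# Carry the last observed value forward into the forecast window
imputed = predictor.ffill()
print(imputed.tolist())  # [41.0, 42.0, 42.0, 42.0, 42.0]
```

This is why such forecasts should be used with caution: a predictor frozen at its last value carries no information about how conditions actually evolve over the horizon.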
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# We will take the destination date as a last date in the test set.\n",
|
||||||
|
"dest = max(X_test[TIME_COLUMN_NAME])\n",
|
||||||
|
"y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)\n",
|
||||||
|
"\n",
|
||||||
|
"# This form also shows how we imputed the predictors which were not given. (Not so well! Use with caution!)\n",
|
||||||
|
"xy_dest"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"## Forecasting away from training data\n",
|
||||||
|
"\n",
|
||||||
|
"Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model \"looks back\" -- uses previous values of the target -- then we somehow need to provide those values to the model.\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per grain, so each grain can have a different forecast origin. \n",
|
||||||
|
"\n",
|
||||||
|
"The part of data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_query` (aligned with the corresponding times in `X_test`)."
|
||||||
|
]
|
||||||
|
},
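Per-grain forecast origins can be located with a one-line groupby: the last timestamp whose target is known. A standalone sketch with hypothetical column names ('date', 'grain', 'y'):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'date': pd.to_datetime(['2000-01-01', '2000-01-02', '2000-01-03'] * 2),
    'grain': ['g0'] * 3 + ['g1'] * 3,
    'y': [1.0, 2.0, np.nan, 1.0, np.nan, np.nan],  # NaN = still to be forecast
})

# Forecast origin per grain = last period with an observed target value
origins = df.dropna(subset=['y']).groupby('grain')['date'].max()
print(origins.to_dict())
```

Here grain `g0` has a later origin than `g1`, showing that the origins need not agree across series.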
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# generate the same kind of test data we trained on, \n",
|
||||||
|
"# but now make the train set much longer, so that the test set will be in the future\n",
|
||||||
|
"X_context, y_context, X_away, y_away = get_timeseries(train_len=42, # train data was 30 steps long\n",
|
||||||
|
" test_len=4,\n",
|
||||||
|
" time_column_name=TIME_COLUMN_NAME,\n",
|
||||||
|
" target_column_name=TARGET_COLUMN_NAME,\n",
|
||||||
|
" grain_column_name=GRAIN_COLUMN_NAME,\n",
|
||||||
|
" grains=2)\n",
|
||||||
|
"\n",
|
||||||
|
"# end of the data we trained on\n",
|
||||||
|
"print(X_train.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].max())\n",
|
||||||
|
"# start of the data we want to predict on\n",
|
||||||
|
"print(X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].min())"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"There is a gap of 12 hours between the end of training and the beginning of `X_away`. (It looks like 13 because all timestamps point to the start of one-hour periods.) Using only `X_away` will fail without adding context data for the model to consume."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"try: \n",
|
||||||
|
" y_query = y_away.copy()\n",
|
||||||
|
"    y_query.fill(np.nan)\n",
|
||||||
|
" y_pred_away, xy_away = fitted_model.forecast(X_away, y_query)\n",
|
||||||
|
" xy_away\n",
|
||||||
|
"except Exception as e:\n",
|
||||||
|
" print(e)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"How should we read that error message? The forecast origin is the last time the model saw an actual value of `y` (the target). That was at the end of the training data! Because the model received all `NaN`s (and no actual target values), it is attempting to forecast from the end of the training data. But the requested forecast periods are past the maximum horizon. We need to provide definite `y` values to establish the forecast origin.\n",
|
||||||
|
"\n",
|
||||||
|
"We will use this helper function to take the required amount of context from the data preceding the testing data. Its definition is intentionally simplified to keep the idea clear."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"def make_forecasting_query(fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback):\n",
|
||||||
|
"\n",
|
||||||
|
" \"\"\"\n",
|
||||||
|
" This function will take the full dataset, and create the query\n",
|
||||||
|
" to predict all values of the grain from the `forecast_origin`\n",
|
||||||
|
"    forward over the next `horizon` of time. Context from the previous\n",
|
||||||
|
" `lookback` periods will be included.\n",
|
||||||
|
"\n",
|
||||||
|
" \n",
|
||||||
|
"\n",
|
||||||
|
" fulldata: pandas.DataFrame a time series dataset. Needs to contain X and y.\n",
|
||||||
|
" time_column_name: string which column (must be in fulldata) is the time axis\n",
|
||||||
|
" target_column_name: string which column (must be in fulldata) is to be forecast\n",
|
||||||
|
" forecast_origin: datetime type the last time we (pretend to) have target values \n",
|
||||||
|
" horizon: timedelta how far forward, in time units (not periods)\n",
|
||||||
|
" lookback: timedelta how far back does the model look?\n",
|
||||||
|
"\n",
|
||||||
|
" Example:\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" ```\n",
|
||||||
|
"\n",
|
||||||
|
" forecast_origin = pd.to_datetime('2012-09-01') + pd.DateOffset(days=5) # forecast 5 days after end of training\n",
|
||||||
|
" print(forecast_origin)\n",
|
||||||
|
"\n",
|
||||||
|
" X_query, y_query = make_forecasting_query(data, \n",
|
||||||
|
" forecast_origin = forecast_origin,\n",
|
||||||
|
" horizon = pd.DateOffset(days=7), # 7 days into the future\n",
|
||||||
|
" lookback = pd.DateOffset(days=1), # model has lag 1 period (day)\n",
|
||||||
|
" )\n",
|
||||||
|
"\n",
|
||||||
|
" ```\n",
|
||||||
|
" \"\"\"\n",
|
||||||
|
"\n",
|
||||||
|
" X_past = fulldata[ (fulldata[ time_column_name ] > forecast_origin - lookback) &\n",
|
||||||
|
" (fulldata[ time_column_name ] <= forecast_origin)\n",
|
||||||
|
" ]\n",
|
||||||
|
"\n",
|
||||||
|
" X_future = fulldata[ (fulldata[ time_column_name ] > forecast_origin) &\n",
|
||||||
|
" (fulldata[ time_column_name ] <= forecast_origin + horizon)\n",
|
||||||
|
" ]\n",
|
||||||
|
"\n",
|
||||||
|
"    y_past = X_past.pop(target_column_name).values.astype(float)\n",
|
||||||
|
"    y_future = X_future.pop(target_column_name).values.astype(float)\n",
|
||||||
|
"\n",
|
||||||
|
" # Now take y_future and turn it into question marks\n",
|
||||||
|
"    y_query = y_future.copy().astype(float) # because sometimes life hands you an int\n",
|
||||||
|
"    y_query.fill(np.nan)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" print(\"X_past is \" + str(X_past.shape) + \" - shaped\")\n",
|
||||||
|
" print(\"X_future is \" + str(X_future.shape) + \" - shaped\")\n",
|
||||||
|
" print(\"y_past is \" + str(y_past.shape) + \" - shaped\")\n",
|
||||||
|
" print(\"y_query is \" + str(y_query.shape) + \" - shaped\")\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" X_pred = pd.concat([X_past, X_future])\n",
|
||||||
|
" y_pred = np.concatenate([y_past, y_query])\n",
|
||||||
|
" return X_pred, y_pred"
|
||||||
|
]
|
||||||
|
},
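The windowing logic above - context taken from `(forecast_origin - lookback, forecast_origin]` and the query from `(forecast_origin, forecast_origin + horizon]` - can be sanity-checked on a tiny synthetic hourly frame (a standalone sketch with hypothetical data and column names):

```python
import numpy as np
import pandas as pd

times = pd.date_range('2000-01-01', periods=10, freq='h')
df = pd.DataFrame({'date': times, 'y': np.arange(10.0)})

origin = times[6]                  # pretend the target is known up to 06:00
lookback = pd.DateOffset(hours=3)  # model looks back 3 periods
horizon = pd.DateOffset(hours=3)   # forecast 3 periods ahead

# Same half-open interval conditions as in make_forecasting_query
past = df[(df['date'] > origin - lookback) & (df['date'] <= origin)]
future = df[(df['date'] > origin) & (df['date'] <= origin + horizon)]
print(len(past), len(future))  # 3 3
```

The boundary conditions matter: the origin itself lands in the past (context) window, and the first forecast period starts one step after it.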
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Let's see where the context data ends - it ends, by construction, just before the testing data starts."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"print(X_context.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))\n",
|
||||||
|
"print( X_away.groupby(GRAIN_COLUMN_NAME)[TIME_COLUMN_NAME].agg(['min','max','count']))\n",
|
||||||
|
"X_context.tail(5)"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# Since the length of the lookback is 3, \n",
|
||||||
|
"# we need to add 3 periods from the context to the request\n",
|
||||||
|
"# so that the model has the data it needs\n",
|
||||||
|
"\n",
|
||||||
|
"# Put the X and y back together for a while. \n",
|
||||||
|
"# They like each other and it makes them happy.\n",
|
||||||
|
"X_context[TARGET_COLUMN_NAME] = y_context\n",
|
||||||
|
"X_away[TARGET_COLUMN_NAME] = y_away\n",
|
||||||
|
"fulldata = pd.concat([X_context, X_away])\n",
|
||||||
|
"\n",
|
||||||
|
"# forecast origin is the last point of data, which is one 1-hr period before test\n",
|
||||||
|
"forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)\n",
|
||||||
|
"# it is indeed the last point of the context\n",
|
||||||
|
"assert forecast_origin == X_context[TIME_COLUMN_NAME].max()\n",
|
||||||
|
"print(\"Forecast origin: \" + str(forecast_origin))\n",
|
||||||
|
" \n",
|
||||||
|
"# the model uses lags and rolling windows to look back in time\n",
|
||||||
|
"n_lookback_periods = max(max(lags), rolling_window_length)\n",
|
||||||
|
"lookback = pd.DateOffset(hours=n_lookback_periods)\n",
|
||||||
|
"\n",
|
||||||
|
"horizon = pd.DateOffset(hours=max_horizon)\n",
|
||||||
|
"\n",
|
||||||
|
"# now make the forecast query from context (refer to figure)\n",
|
||||||
|
"X_pred, y_pred = make_forecasting_query(fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME,\n",
|
||||||
|
" forecast_origin, horizon, lookback)\n",
|
||||||
|
"\n",
|
||||||
|
"# show the forecast request aligned\n",
|
||||||
|
"X_show = X_pred.copy()\n",
|
||||||
|
"X_show[TARGET_COLUMN_NAME] = y_pred\n",
|
||||||
|
"X_show"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {},
|
||||||
|
"source": [
|
||||||
|
"Note that the forecast origin is at 17:00 for both grains, and periods from 18:00 are to be forecast."
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# Now everything works\n",
|
||||||
|
"y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)\n",
|
||||||
|
"\n",
|
||||||
|
"# show the forecast aligned\n",
|
||||||
|
"X_show = xy_away.reset_index()\n",
|
||||||
|
"# without the generated features\n",
|
||||||
|
"X_show[['date', 'grain', 'ext_predictor', '_automl_target_col']]\n",
|
||||||
|
"# prediction is in _automl_target_col"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"authors": [
|
||||||
|
{
|
||||||
|
"name": "erwright, nirovins"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"kernelspec": {
|
||||||
|
"display_name": "Python 3.6",
|
||||||
|
"language": "python",
|
||||||
|
"name": "python36"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"codemirror_mode": {
|
||||||
|
"name": "ipython",
|
||||||
|
"version": 3
|
||||||
|
},
|
||||||
|
"file_extension": ".py",
|
||||||
|
"mimetype": "text/x-python",
|
||||||
|
"name": "python",
|
||||||
|
"nbconvert_exporter": "python",
|
||||||
|
"pygments_lexer": "ipython3",
|
||||||
|
"version": "3.6.7"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 2
|
||||||
|
}
|
||||||
@@ -0,0 +1,9 @@
|
|||||||
|
name: automl-forecasting-function
|
||||||
|
dependencies:
|
||||||
|
- pip:
|
||||||
|
- azureml-sdk
|
||||||
|
- azureml-train-automl
|
||||||
|
- azureml-widgets
|
||||||
|
- pandas_ml
|
||||||
|
- statsmodels
|
||||||
|
- matplotlib
|
||||||
@@ -89,8 +89,6 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"# choose a name for the run history container in the workspace\n",
|
"# choose a name for the run history container in the workspace\n",
|
||||||
"experiment_name = 'automl-ojforecasting'\n",
|
"experiment_name = 'automl-ojforecasting'\n",
|
||||||
"# project folder\n",
|
|
||||||
"project_folder = './sample_projects/automl-local-ojforecasting'\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"experiment = Experiment(ws, experiment_name)\n",
|
"experiment = Experiment(ws, experiment_name)\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -100,7 +98,6 @@
|
|||||||
"output['Workspace'] = ws.name\n",
|
"output['Workspace'] = ws.name\n",
|
||||||
"output['Resource Group'] = ws.resource_group\n",
|
"output['Resource Group'] = ws.resource_group\n",
|
||||||
"output['Location'] = ws.location\n",
|
"output['Location'] = ws.location\n",
|
||||||
"output['Project Directory'] = project_folder\n",
|
|
||||||
"output['Run History Name'] = experiment_name\n",
|
"output['Run History Name'] = experiment_name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
@@ -247,7 +244,6 @@
|
|||||||
"|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models\n",
|
"|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models\n",
|
||||||
"|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models\n",
|
"|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models\n",
|
||||||
"|**debug_log**|Log file path for writing debugging information\n",
|
"|**debug_log**|Log file path for writing debugging information\n",
|
||||||
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
|
|
||||||
"|**time_column_name**|Name of the datetime column in the input data|\n",
|
"|**time_column_name**|Name of the datetime column in the input data|\n",
|
||||||
"|**grain_column_names**|Name(s) of the columns defining individual series in the input data|\n",
|
"|**grain_column_names**|Name(s) of the columns defining individual series in the input data|\n",
|
||||||
"|**drop_column_names**|Name(s) of columns to drop prior to modeling|\n",
|
"|**drop_column_names**|Name(s) of columns to drop prior to modeling|\n",
|
||||||
@@ -276,7 +272,6 @@
|
|||||||
" n_cross_validations=3,\n",
|
" n_cross_validations=3,\n",
|
||||||
" enable_voting_ensemble=False,\n",
|
" enable_voting_ensemble=False,\n",
|
||||||
" enable_stack_ensemble=False,\n",
|
" enable_stack_ensemble=False,\n",
|
||||||
" path=project_folder,\n",
|
|
||||||
" verbosity=logging.INFO,\n",
|
" verbosity=logging.INFO,\n",
|
||||||
" **time_series_settings)"
|
" **time_series_settings)"
|
||||||
]
|
]
|
||||||
@@ -668,7 +663,7 @@
|
|||||||
"for p in ['azureml-train-automl', 'azureml-core']:\n",
|
"for p in ['azureml-train-automl', 'azureml-core']:\n",
|
||||||
" print('{}\\t{}'.format(p, dependencies[p]))\n",
|
" print('{}\\t{}'.format(p, dependencies[p]))\n",
|
||||||
"\n",
|
"\n",
|
||||||
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-train-automl'])\n",
|
"myenv = CondaDependencies.create(conda_packages=['numpy>=1.16.0,<=1.16.2','scikit-learn','fbprophet==0.5'], pip_packages=['azureml-defaults','azureml-train-automl'])\n",
|
||||||
"\n",
|
"\n",
|
||||||
"myenv.save_to_file('.', conda_env_file_name)"
|
"myenv.save_to_file('.', conda_env_file_name)"
|
||||||
]
|
]
|
||||||
@@ -705,40 +700,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Create a Container Image"
|
"### Deploy the model as a Web Service on Azure Container Instance"
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.image import Image, ContainerImage\n",
|
|
||||||
"\n",
|
|
||||||
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
|
|
||||||
" execution_script = script_file_name,\n",
|
|
||||||
" conda_file = conda_env_file_name,\n",
|
|
||||||
" tags = {'type': \"automl-forecasting\"},\n",
|
|
||||||
" description = \"Image for automl forecasting sample\")\n",
|
|
||||||
"\n",
|
|
||||||
"image = Image.create(name = \"automl-fcast-image\",\n",
|
|
||||||
" # this is the model object \n",
|
|
||||||
" models = [model],\n",
|
|
||||||
" image_config = image_config, \n",
|
|
||||||
" workspace = ws)\n",
|
|
||||||
"\n",
|
|
||||||
"image.wait_for_creation(show_output = True)\n",
|
|
||||||
"\n",
|
|
||||||
"if image.creation_state == 'Failed':\n",
|
|
||||||
" print(\"Image build log at: \" + image.image_build_log_uri)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Deploy the Image as a Web Service on Azure Container Instance"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -747,29 +709,23 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
|
"from azureml.core.model import InferenceConfig\n",
|
||||||
"from azureml.core.webservice import AciWebservice\n",
|
"from azureml.core.webservice import AciWebservice\n",
|
||||||
|
"from azureml.core.webservice import Webservice\n",
|
||||||
|
"from azureml.core.model import Model\n",
|
||||||
|
"\n",
|
||||||
|
"inference_config = InferenceConfig(runtime = \"python\", \n",
|
||||||
|
" entry_script = script_file_name,\n",
|
||||||
|
" conda_file = conda_env_file_name)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
||||||
" memory_gb = 2, \n",
|
" memory_gb = 2, \n",
|
||||||
" tags = {'type': \"automl-forecasting\"},\n",
|
" tags = {'type': \"automl-forecasting\"},\n",
|
||||||
" description = \"Automl forecasting sample service\")"
|
" description = \"Automl forecasting sample service\")\n",
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.webservice import Webservice\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"aci_service_name = 'automl-forecast-01'\n",
|
"aci_service_name = 'automl-forecast-01'\n",
|
||||||
"print(aci_service_name)\n",
|
"print(aci_service_name)\n",
|
||||||
"\n",
|
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
|
||||||
"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
|
|
||||||
" image = image,\n",
|
|
||||||
" name = aci_service_name,\n",
|
|
||||||
" workspace = ws)\n",
|
|
||||||
"aci_service.wait_for_deployment(True)\n",
|
"aci_service.wait_for_deployment(True)\n",
|
||||||
"print(aci_service.state)"
|
"print(aci_service.state)"
|
||||||
]
|
]
|
||||||
@@ -849,7 +805,7 @@
|
|||||||
"name": "python",
|
"name": "python",
|
||||||
"nbconvert_exporter": "python",
|
"nbconvert_exporter": "python",
|
||||||
"pygments_lexer": "ipython3",
|
"pygments_lexer": "ipython3",
|
||||||
"version": "3.6.8"
|
"version": "3.6.7"
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
"nbformat": 4,
|
"nbformat": 4,
|
||||||
|
|||||||
@@ -93,7 +93,6 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"# Choose a name for the experiment.\n",
|
"# Choose a name for the experiment.\n",
|
||||||
"experiment_name = 'automl-local-missing-data'\n",
|
"experiment_name = 'automl-local-missing-data'\n",
|
||||||
"project_folder = './sample_projects/automl-local-missing-data'\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"experiment = Experiment(ws, experiment_name)\n",
|
"experiment = Experiment(ws, experiment_name)\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -103,7 +102,6 @@
|
|||||||
"output['Workspace'] = ws.name\n",
|
"output['Workspace'] = ws.name\n",
|
||||||
"output['Resource Group'] = ws.resource_group\n",
|
"output['Resource Group'] = ws.resource_group\n",
|
||||||
"output['Location'] = ws.location\n",
|
"output['Location'] = ws.location\n",
|
||||||
"output['Project Directory'] = project_folder\n",
|
|
||||||
"output['Experiment Name'] = experiment.name\n",
|
"output['Experiment Name'] = experiment.name\n",
|
||||||
"pd.set_option('display.max_colwidth', -1)\n",
|
"pd.set_option('display.max_colwidth', -1)\n",
|
||||||
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||||
@@ -166,8 +164,7 @@
|
|||||||
"|**experiment_exit_score**|*double* value indicating the target for *primary_metric*. <br>Once the target is surpassed the run terminates.|\n",
|
"|**experiment_exit_score**|*double* value indicating the target for *primary_metric*. <br>Once the target is surpassed the run terminates.|\n",
|
||||||
"|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGD</i><br><i>MultinomialNaiveBayes</i><br><i>BernoulliNaiveBayes</i><br><i>SVM</i><br><i>LinearSVM</i><br><i>KNN</i><br><i>DecisionTree</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>GradientBoosting</i><br><i>TensorFlowDNN</i><br><i>TensorFlowLinearClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i>|\n",
|
"|**blacklist_models**|*List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run.<br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGD</i><br><i>MultinomialNaiveBayes</i><br><i>BernoulliNaiveBayes</i><br><i>SVM</i><br><i>LinearSVM</i><br><i>KNN</i><br><i>DecisionTree</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>GradientBoosting</i><br><i>TensorFlowDNN</i><br><i>TensorFlowLinearClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i>|\n",
|
||||||
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
||||||
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
|
"|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|"
|
||||||
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -186,8 +183,7 @@
 " blacklist_models = ['KNN','LinearSVM'],\n",
 " verbosity = logging.INFO,\n",
 " X = X_train, \n",
-" y = y_train,\n",
-" path = project_folder)"
+" y = y_train)"
 ]
 },
 {
@@ -0,0 +1,593 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Microsoft Corporation. All rights reserved.\n",
    "\n",
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Automated Machine Learning\n",
    "_**Regression on remote compute using Computer Hardware dataset with model explanations**_\n",
    "\n",
    "## Contents\n",
    "1. [Introduction](#Introduction)\n",
    "1. [Setup](#Setup)\n",
    "1. [Train](#Train)\n",
    "1. [Results](#Results)\n",
    "1. [Explanations](#Explanations)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "\n",
    "In this example we use the Hardware Performance Dataset to showcase how you can use AutoML for a simple regression problem. After training AutoML models for this regression dataset, we show how you can compute model explanations on your remote compute using a sample explainer script.\n",
    "\n",
    "If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
    "\n",
    "In this notebook you will learn how to:\n",
    "1. Create an `Experiment` in an existing `Workspace`.\n",
    "2. Configure AutoML using `AutoMLConfig`.\n",
    "3. Train the model using remote compute.\n",
    "4. Explore the results.\n",
    "5. Set up remote compute for computing the model explanations for a given AutoML model.\n",
    "6. Start an AzureML experiment on your remote compute to compute explanations for an AutoML model.\n",
    "7. Download the feature importance for engineered features and visualize the explanations for engineered features.\n",
    "8. Download the feature importance for raw features and visualize the explanations for raw features.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "\n",
    "from matplotlib import pyplot as plt\n",
    "import pandas as pd\n",
    "import os\n",
    "\n",
    "import azureml.core\n",
    "from azureml.core.experiment import Experiment\n",
    "from azureml.core.workspace import Workspace\n",
    "from azureml.core.dataset import Dataset\n",
    "from azureml.train.automl import AutoMLConfig"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ws = Workspace.from_config()\n",
    "\n",
    "# choose a name for experiment\n",
    "experiment_name = 'automl-regression-computer-hardware'\n",
    "\n",
    "experiment=Experiment(ws, experiment_name)\n",
    "\n",
    "output = {}\n",
    "output['SDK version'] = azureml.core.VERSION\n",
    "output['Subscription ID'] = ws.subscription_id\n",
    "output['Workspace'] = ws.name\n",
    "output['Resource Group'] = ws.resource_group\n",
    "output['Location'] = ws.location\n",
    "output['Experiment Name'] = experiment.name\n",
    "pd.set_option('display.max_colwidth', -1)\n",
    "outputDf = pd.DataFrame(data = output, index = [''])\n",
    "outputDf.T"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create or Attach existing AmlCompute\n",
    "You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
    "#### Creation of AmlCompute takes approximately 5 minutes. \n",
    "If an AmlCompute with that name is already in your workspace, this code will skip the creation process.\n",
    "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core.compute import AmlCompute\n",
    "from azureml.core.compute import ComputeTarget\n",
    "\n",
    "# Choose a name for your cluster.\n",
    "amlcompute_cluster_name = \"automlcl\"\n",
    "\n",
    "found = False\n",
    "# Check if this compute target already exists in the workspace.\n",
    "cts = ws.compute_targets\n",
    "if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
    "    found = True\n",
    "    print('Found existing compute target.')\n",
    "    compute_target = cts[amlcompute_cluster_name]\n",
    "    \n",
    "if not found:\n",
    "    print('Creating a new compute target...')\n",
    "    provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
    "                                                                #vm_priority = 'lowpriority', # optional\n",
    "                                                                max_nodes = 6)\n",
    "\n",
    "    # Create the cluster.\n",
    "    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
    "    \n",
    "print('Checking cluster status...')\n",
    "# Can poll for a minimum number of nodes and for a specific timeout.\n",
    "# If no min_node_count is provided, it will use the scale settings for the cluster.\n",
    "compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
    "    \n",
    "# For a more detailed view of current AmlCompute status, use get_status()."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Conda Dependencies for AutoML training experiment\n",
    "\n",
    "Create the conda dependencies for running the AutoML experiment on remote compute."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core.runconfig import RunConfiguration\n",
    "from azureml.core.conda_dependencies import CondaDependencies\n",
    "import pkg_resources\n",
    "\n",
    "# create a new RunConfig object\n",
    "conda_run_config = RunConfiguration(framework=\"python\")\n",
    "\n",
    "# Set compute target to AmlCompute\n",
    "conda_run_config.target = compute_target\n",
    "conda_run_config.environment.docker.enabled = True\n",
    "\n",
    "cd = CondaDependencies.create(conda_packages=['numpy','py-xgboost<=0.80'])\n",
    "conda_run_config.environment.python.conda_dependencies = cd"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup Training and Test Data for AutoML experiment\n",
    "\n",
    "Here we create the train and test datasets for the hardware performance dataset. We also register the datasets in your workspace under a name so that they can be accessed from the remote compute."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Data source\n",
    "data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/machineData.csv\"\n",
    "\n",
    "# Create dataset from the url\n",
    "dataset = Dataset.Tabular.from_delimited_files(data)\n",
    "\n",
    "# Split the dataset into train and test datasets\n",
    "train_dataset, test_dataset = dataset.random_split(percentage=0.8, seed=223)\n",
    "\n",
    "# Register the train dataset with your workspace\n",
    "train_dataset.register(workspace = ws, name = 'hardware_performance_train_dataset',\n",
    "                       description = 'hardware performance training data',\n",
    "                       create_new_version=True)\n",
    "\n",
    "# Register the test dataset with your workspace\n",
    "test_dataset.register(workspace = ws, name = 'hardware_performance_test_dataset',\n",
    "                      description = 'hardware performance test data',\n",
    "                      create_new_version=True)\n",
    "\n",
    "# Drop the label column from the train dataset\n",
    "X_train = train_dataset.drop_columns(columns=['ERP'])\n",
    "y_train = train_dataset.keep_columns(columns=['ERP'], validate=True)\n",
    "\n",
    "# Drop the label column from the test dataset\n",
    "X_test = test_dataset.drop_columns(columns=['ERP'])\n",
    "\n",
    "# Display the top rows in the train dataset\n",
    "X_train.take(5).to_pandas_dataframe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train\n",
    "\n",
    "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n",
    "\n",
    "|Property|Description|\n",
    "|-|-|\n",
    "|**task**|classification or regression|\n",
    "|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|\n",
    "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
    "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
    "|**n_cross_validations**|Number of cross validation splits.|\n",
    "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
    "|**y**|(sparse) array-like, shape = [n_samples, ], target values.|\n",
    "\n",
    "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "automl_settings = {\n",
    "    \"iteration_timeout_minutes\": 5,\n",
    "    \"iterations\": 10,\n",
    "    \"n_cross_validations\": 2,\n",
    "    \"primary_metric\": 'spearman_correlation',\n",
    "    \"preprocess\": True,\n",
    "    \"max_concurrent_iterations\": 1,\n",
    "    \"verbosity\": logging.INFO,\n",
    "}\n",
    "\n",
    "automl_config = AutoMLConfig(task = 'regression',\n",
    "                             debug_log = 'automl_errors_model_exp.log',\n",
    "                             run_configuration=conda_run_config,\n",
    "                             X = X_train,\n",
    "                             y = y_train,\n",
    "                             **automl_settings\n",
    "                            )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
    "In this example, we specify `show_output = True` to print currently running iterations to the console."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "remote_run = experiment.submit(automl_config, show_output = True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "remote_run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Widget for Monitoring Runs\n",
    "\n",
    "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
    "\n",
    "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.widgets import RunDetails\n",
    "RunDetails(remote_run).show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Explanations\n",
    "This section will walk you through the workflow to compute model explanations for an AutoML model on your remote compute.\n",
    "\n",
    "### Retrieve any AutoML Model for explanations\n",
    "\n",
    "Below we select an AutoML pipeline from our iterations. The `get_output` method returns the AutoML run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "automl_run, fitted_model = remote_run.get_output(iteration=5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup model explanation run on the remote compute\n",
    "The following section provides details on how to set up an AzureML experiment to run model explanations for an AutoML model on your remote compute."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Sample script used for computing explanations\n",
    "View the sample script for computing the model explanations for your AutoML model on remote compute."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with open('train_explainer.py', 'r') as cefr:\n",
    "    print(cefr.read())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Substitute values in your sample script\n",
    "The following cell shows how to substitute values in the sample script so that it matches your experiment and dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import shutil\n",
    "\n",
    "# create script folder\n",
    "script_folder = './sample_projects/automl-regression-computer-hardware'\n",
    "if not os.path.exists(script_folder):\n",
    "    os.makedirs(script_folder)\n",
    "\n",
    "# Copy the sample script to script folder.\n",
    "shutil.copy('train_explainer.py', script_folder)\n",
    "\n",
    "# Create the explainer script that will run on the remote compute.\n",
    "script_file_name = script_folder + '/train_explainer.py'\n",
    "\n",
    "# Open the sample script for modification\n",
    "with open(script_file_name, 'r') as cefr:\n",
    "    content = cefr.read()\n",
    "\n",
    "# Replace the placeholders in the train_explainer.py file with the appropriate values\n",
    "content = content.replace('<<experimnet_name>>', automl_run.experiment.name) # your experiment name.\n",
    "content = content.replace('<<run_id>>', automl_run.id) # Run-id of the AutoML run for which you want to explain the model.\n",
    "content = content.replace('<<target_column_name>>', 'ERP') # Your target column name\n",
    "content = content.replace('<<task>>', 'regression') # Training task type\n",
    "# Name of your training dataset registered with your workspace\n",
    "content = content.replace('<<train_dataset_name>>', 'hardware_performance_train_dataset')\n",
    "# Name of your test dataset registered with your workspace\n",
    "content = content.replace('<<test_dataset_name>>', 'hardware_performance_test_dataset')\n",
    "\n",
    "# Write sample file into your script folder.\n",
    "with open(script_file_name, 'w') as cefw:\n",
    "    cefw.write(content)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Create conda configuration for model explanations experiment\n",
    "We need the `azureml-explain-model`, `azureml-train-automl` and `azureml-core` packages for computing model explanations for your AutoML model on remote compute."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core.runconfig import RunConfiguration\n",
    "from azureml.core.conda_dependencies import CondaDependencies\n",
    "import pkg_resources\n",
    "\n",
    "# create a new RunConfig object\n",
    "conda_run_config = RunConfiguration(framework=\"python\")\n",
    "\n",
    "# Set compute target to AmlCompute\n",
    "conda_run_config.target = compute_target\n",
    "conda_run_config.environment.docker.enabled = True\n",
    "azureml_pip_packages = [\n",
    "    'azureml-train-automl', 'azureml-core', 'azureml-explain-model'\n",
    "]\n",
    "\n",
    "# specify CondaDependencies obj\n",
    "conda_run_config.environment.python.conda_dependencies = CondaDependencies.create(\n",
    "    conda_packages=['scikit-learn', 'numpy','py-xgboost<=0.80'],\n",
    "    pip_packages=azureml_pip_packages)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Submit the experiment for model explanations\n",
    "Submit the experiment with the above `run_config` and the sample script for computing explanations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Now submit a run on AmlCompute for model explanations\n",
    "from azureml.core.script_run_config import ScriptRunConfig\n",
    "\n",
    "script_run_config = ScriptRunConfig(source_directory=script_folder,\n",
    "                                    script='train_explainer.py',\n",
    "                                    run_config=conda_run_config)\n",
    "\n",
    "run = experiment.submit(script_run_config)\n",
    "\n",
    "# Show run details\n",
    "run"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "# Shows output of the run on stdout.\n",
    "run.wait_for_completion(show_output=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Feature importance and explanation dashboard\n",
    "In this section we describe how you can download the explanation results from the explanations experiment and visualize the feature importance for your AutoML model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Setup for visualizing the model explanation results\n",
    "For visualizing the explanation results for the *fitted_model* we need to perform the following steps:\n",
    "1. Featurize test data samples.\n",
    "\n",
    "The *automl_explainer_setup_obj* contains all the structures from the above list."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.train.automl.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations\n",
    "explainer_setup_class = automl_setup_model_explanations(fitted_model, 'regression', X_test=X_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Download engineered feature importance from artifact store\n",
    "You can use *ExplanationClient* to download the engineered feature explanations from the artifact store of the *automl_run*. You can also use *ExplanationDashboard* to view the dashboard visualization of the feature importance values of the engineered features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.explain.model._internal.explanation_client import ExplanationClient\n",
    "from azureml.contrib.interpret.visualize import ExplanationDashboard\n",
    "client = ExplanationClient.from_run(automl_run)\n",
    "engineered_explanations = client.download_model_explanation(raw=False)\n",
    "print(engineered_explanations.get_feature_importance_dict())\n",
    "ExplanationDashboard(engineered_explanations, explainer_setup_class.automl_estimator, explainer_setup_class.X_test_transform)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Download raw feature importance from artifact store\n",
    "You can use *ExplanationClient* to download the raw feature explanations from the artifact store of the *automl_run*. You can also use *ExplanationDashboard* to view the dashboard visualization of the feature importance values of the raw features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "raw_explanations = client.download_model_explanation(raw=True)\n",
    "print(raw_explanations.get_feature_importance_dict())\n",
    "ExplanationDashboard(raw_explanations, explainer_setup_class.automl_pipeline, explainer_setup_class.X_test_raw)"
   ]
  }
 ],
 "metadata": {
  "authors": [
   {
    "name": "v-rasav"
   }
  ],
  "kernelspec": {
   "display_name": "Python 3.6",
   "language": "python",
   "name": "python36"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
@@ -0,0 +1,11 @@
name: auto-ml-model-explanations-remote-compute
dependencies:
- pip:
  - azureml-sdk
  - interpret
  - azureml-train-automl
  - azureml-widgets
  - matplotlib
  - pandas_ml
  - azureml-explain-model
  - azureml-contrib-interpret
@@ -0,0 +1,64 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.
import os

from azureml.core.run import Run
from azureml.core.experiment import Experiment
from sklearn.externals import joblib
from azureml.core.dataset import Dataset
from azureml.train.automl.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations
from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel
from azureml.explain.model.mimic_wrapper import MimicWrapper
from automl.client.core.common.constants import MODEL_PATH


OUTPUT_DIR = './outputs/'
os.makedirs(OUTPUT_DIR, exist_ok=True)

# Get workspace from the run context
run = Run.get_context()
ws = run.experiment.workspace

# Get the AutoML run object from the experiment name and the workspace
experiment = Experiment(ws, '<<experimnet_name>>')
automl_run = Run(experiment=experiment, run_id='<<run_id>>')

# Download the best model from the artifact store
automl_run.download_file(name=MODEL_PATH, output_file_path='model.pkl')

# Load the AutoML model into memory
fitted_model = joblib.load('model.pkl')

# Get the train dataset from the workspace
train_dataset = Dataset.get_by_name(workspace=ws, name='<<train_dataset_name>>')
# Drop the label column to get the training set.
X_train = train_dataset.drop_columns(columns=['<<target_column_name>>'])
y_train = train_dataset.keep_columns(columns=['<<target_column_name>>'], validate=True)

# Get the test dataset from the workspace
test_dataset = Dataset.get_by_name(workspace=ws, name='<<test_dataset_name>>')
# Drop the label column to get the testing set.
X_test = test_dataset.drop_columns(columns=['<<target_column_name>>'])

# Setup the class for explaining the AutoML models
automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, '<<task>>',
                                                             X=X_train, X_test=X_test,
                                                             y=y_train)

# Initialize the Mimic Explainer
explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel,
                         init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run,
                         features=automl_explainer_setup_obj.engineered_feature_names,
                         feature_maps=[automl_explainer_setup_obj.feature_map],
                         classes=automl_explainer_setup_obj.classes)

# Compute the engineered explanations
engineered_explanations = explainer.explain(['local', 'global'],
                                            eval_dataset=automl_explainer_setup_obj.X_test_transform)

# Compute the raw explanations
raw_explanations = explainer.explain(['local', 'global'], get_raw=True,
                                     raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
                                     eval_dataset=automl_explainer_setup_obj.X_test_transform)

print("Engineered and raw explanations computed successfully")
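The `<<...>>` tokens in the explainer script above are template placeholders; the accompanying notebook fills them in with plain `str.replace` calls before submitting the script to remote compute. A minimal standalone sketch of that substitution pattern follows; the template text and values below are hypothetical, not taken from a real run.

```python
# Template fragment mirroring the placeholder style used by train_explainer.py.
template = (
    "experiment = Experiment(ws, '<<experimnet_name>>')\n"
    "automl_run = Run(experiment=experiment, run_id='<<run_id>>')\n"
)

# Map each placeholder token to its concrete value (illustrative examples).
substitutions = {
    '<<experimnet_name>>': 'automl-regression-computer-hardware',
    '<<run_id>>': 'AutoML_00000000-0000-0000-0000-000000000000_5',
}

# Apply every substitution in turn, exactly as the notebook cell does.
for token, value in substitutions.items():
    template = template.replace(token, value)

print(template)
```

In the notebook the rewritten script is then written back to the script folder, so the file that lands on the compute target contains no remaining `<<...>>` tokens.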
@@ -21,14 +21,16 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"# Automated Machine Learning\n",
|
"# Automated Machine Learning\n",
|
||||||
"_**Explain classification model and visualize the explanation**_\n",
|
"_**Explain classification model, visualize the explanation and operationalize the explainer along with AutoML model**_\n",
|
||||||
"\n",
|
"\n",
|
||||||
"## Contents\n",
|
"## Contents\n",
|
||||||
"1. [Introduction](#Introduction)\n",
|
"1. [Introduction](#Introduction)\n",
|
||||||
"1. [Setup](#Setup)\n",
|
"1. [Setup](#Setup)\n",
|
||||||
"1. [Data](#Data)\n",
|
"1. [Data](#Data)\n",
|
||||||
"1. [Train](#Train)\n",
|
"1. [Train](#Train)\n",
|
||||||
"1. [Results](#Results)"
|
"1. [Results](#Results)\n",
|
||||||
|
"1. [Explanations](#Explanations)\n",
|
||||||
|
"1. [Operationailze](#Operationailze)"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -45,7 +47,8 @@
|
|||||||
"2. Instantiating AutoMLConfig\n",
|
"2. Instantiating AutoMLConfig\n",
|
||||||
"3. Training the Model using local compute and explain the model\n",
|
"3. Training the Model using local compute and explain the model\n",
|
||||||
"4. Visualization model's feature importance in widget\n",
|
"4. Visualization model's feature importance in widget\n",
|
||||||
"5. Explore best model's explanation"
|
"5. Explore any model's explanation\n",
|
||||||
|
"6. Operationalize the AutoML model and the explaination model"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -69,7 +72,9 @@
|
|||||||
"import azureml.core\n",
|
"import azureml.core\n",
|
||||||
"from azureml.core.experiment import Experiment\n",
|
"from azureml.core.experiment import Experiment\n",
|
||||||
"from azureml.core.workspace import Workspace\n",
|
"from azureml.core.workspace import Workspace\n",
|
||||||
"from azureml.train.automl import AutoMLConfig"
|
"from azureml.train.automl import AutoMLConfig\n",
|
||||||
|
"from azureml.core.dataset import Dataset\n",
|
||||||
|
"from azureml.explain.model._internal.explanation_client import ExplanationClient"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -82,8 +87,6 @@
 "\n",
 "# choose a name for experiment\n",
 "experiment_name = 'automl-model-explanation'\n",
-"# project folder\n",
-"project_folder = './sample_projects/automl-model-explanation'\n",
 "\n",
 "experiment=Experiment(ws, experiment_name)\n",
 "\n",
@@ -93,7 +96,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -107,29 +109,42 @@
 "## Data"
 ]
},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Training Data"
+]
+},
{
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
-"from sklearn import datasets\n",
-"\n",
-"iris = datasets.load_iris()\n",
-"y = iris.target\n",
-"X = iris.data\n",
-"\n",
-"features = iris.feature_names\n",
-"\n",
-"from sklearn.model_selection import train_test_split\n",
-"X_train, X_test, y_train, y_test = train_test_split(X,\n",
-" y,\n",
-" test_size=0.1,\n",
-" random_state=100,\n",
-" stratify=y)\n",
-"\n",
-"X_train = pd.DataFrame(X_train, columns=features)\n",
-"X_test = pd.DataFrame(X_test, columns=features)"
+"train_data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv\"\n",
+"train_dataset = Dataset.Tabular.from_delimited_files(train_data)\n",
+"X_train = train_dataset.drop_columns(columns=['y']).to_pandas_dataframe()\n",
+"y_train = train_dataset.keep_columns(columns=['y'], validate=True).to_pandas_dataframe()"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Test Data"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"test_data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv\"\n",
+"test_dataset = Dataset.Tabular.from_delimited_files(test_data)\n",
+"X_test = test_dataset.drop_columns(columns=['y']).to_pandas_dataframe()\n",
+"y_test = test_dataset.keep_columns(columns=['y'], validate=True).to_pandas_dataframe()"
 ]
},
{
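The new training-data cell splits features from the label with `drop_columns`/`keep_columns` on a `Dataset`. The same split on a plain pandas DataFrame can be sketched as follows (the column names below are invented for illustration, not the real bank-marketing schema):

```python
import pandas as pd

# Hypothetical stand-in for the downloaded bank-marketing table.
df = pd.DataFrame({
    "age": [30, 41, 52, 28],
    "balance": [1200, 300, 5400, 80],
    "y": ["no", "yes", "no", "yes"],
})

# Rough pandas equivalent of Dataset.Tabular drop_columns / keep_columns:
X_train = df.drop(columns=["y"])  # feature columns only
y_train = df[["y"]]               # label column only

print(X_train.columns.tolist())  # -> ['age', 'balance']
print(y_train.shape)             # -> (4, 1)
```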
@@ -148,10 +163,7 @@
 "|**iterations**|Number of iterations. In each iteration Auto ML trains the data with a specific pipeline|\n",
 "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
 "|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
-"|**X_valid**|(sparse) array-like, shape = [n_samples, n_features]|\n",
-"|**y_valid**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
-"|**model_explainability**|Indicate to explain each trained pipeline or not |\n",
-"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. |"
+"|**model_explainability**|Indicates whether to explain each trained pipeline or not|"
 ]
},
{
@@ -166,12 +178,11 @@
 " iteration_timeout_minutes = 200,\n",
 " iterations = 10,\n",
 " verbosity = logging.INFO,\n",
+" preprocess = True,\n",
 " X = X_train, \n",
 " y = y_train,\n",
-" X_valid = X_test,\n",
-" y_valid = y_test,\n",
-" model_explainability=True,\n",
-" path=project_folder)"
+" n_cross_validations = 5,\n",
+" model_explainability=True)"
 ]
},
{
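This hunk swaps the explicit validation set (`X_valid`/`y_valid`) for `n_cross_validations = 5`, i.e. each candidate pipeline is scored by 5-fold cross-validation instead of on one fixed holdout. A minimal dependency-free sketch of that idea, using a toy threshold "model" rather than an AutoML pipeline:

```python
# Toy dataset: label is 1 exactly when x >= 10 (a deterministic rule for illustration).
data = [(x, 1 if x >= 10 else 0) for x in range(20)]

def cross_val_accuracy(data, k=5):
    """Split data into k folds; fit a threshold rule on k-1 folds, score the held-out fold."""
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        held_out = folds[i]
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        # "Training": pick the smallest x labelled 1 in the training split as the threshold.
        threshold = min(x for x, y in train if y == 1)
        correct = sum(1 for x, y in held_out if (1 if x >= threshold else 0) == y)
        scores.append(correct / len(held_out))
    return scores

scores = cross_val_accuracy(data, k=5)
print(len(scores))             # one accuracy per fold -> 5
print(sum(scores) / len(scores))  # averaged score, as AutoML would report
```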
@@ -254,55 +265,15 @@
 "source": [
 "### Best Model's explanation\n",
 "\n",
-"Retrieve the explanation from the best_run. And explanation information includes:\n",
-"\n",
-"1.\tshap_values: The explanation information generated by shap lib\n",
-"2.\texpected_values: The expected value of the model applied to set of X_train data.\n",
-"3.\toverall_summary: The model level feature importance values sorted in descending order\n",
-"4.\toverall_imp: The feature names sorted in the same order as in overall_summary\n",
-"5.\tper_class_summary: The class level feature importance values sorted in descending order. Only available for the classification case\n",
-"6.\tper_class_imp: The feature names sorted in the same order as in per_class_summary. Only available for the classification case\n",
-"\n",
-"Note:- The **retrieve_model_explanation()** API only works in case AutoML has been configured with **'model_explainability'** flag set to **True**. "
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azureml.train.automl.automlexplainer import retrieve_model_explanation\n",
-"\n",
-"shap_values, expected_values, overall_summary, overall_imp, per_class_summary, per_class_imp = \\\n",
-"    retrieve_model_explanation(best_run)"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"print(overall_summary)\n",
-"print(overall_imp)"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"print(per_class_summary)\n",
-"print(per_class_imp)"
+"Retrieve the explanation from the *best_run*, which includes explanations for engineered features and raw features."
 ]
},
{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Beside retrieve the existed model explanation information, explain the model with different train/test data"
+"#### Download engineered feature importance from artifact store\n",
+"You can use *ExplanationClient* to download the engineered feature explanations from the artifact store of the *best_run*."
 ]
},
{
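`download_model_explanation(...).get_feature_importance_dict()` yields a plain feature-to-importance mapping. A small helper for ranking such a dict by absolute importance (the dict values below are invented for illustration, not real explanation output):

```python
def top_features(importances, k=3):
    """Return the k (feature, importance) pairs with the largest absolute importance."""
    return sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

# Hypothetical output of explanation.get_feature_importance_dict()
raw_importances = {"age": 0.12, "balance": -0.30, "duration": 0.55, "campaign": 0.02}
print(top_features(raw_importances, k=2))  # -> [('duration', 0.55), ('balance', -0.3)]
```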
@@ -311,10 +282,65 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from azureml.train.automl.automlexplainer import explain_model\n",
+"client = ExplanationClient.from_run(best_run)\n",
+"engineered_explanations = client.download_model_explanation(raw=False)\n",
+"print(engineered_explanations.get_feature_importance_dict())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Download raw feature importance from artifact store\n",
+"You can use *ExplanationClient* to download the raw feature explanations from the artifact store of the *best_run*."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"client = ExplanationClient.from_run(best_run)\n",
+"raw_explanations = client.download_model_explanation(raw=True)\n",
+"print(raw_explanations.get_feature_importance_dict())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Explanations\n",
+"In this section, we show how to compute and visualize model explanations using the azureml-explain-model package. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps compute and visualize engineered feature importance and raw feature importance based on your test data."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Retrieve any other AutoML model from training"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"automl_run, fitted_model = local_run.get_output(iteration=0)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Setup the model explanations for AutoML models\n",
+"The *fitted_model* can generate the following, which will be used for getting the engineered and raw feature explanations via *automl_setup_model_explanations*:\n",
+"1. Featurized data from the train and test samples\n",
+"2. The engineered and raw feature name lists\n",
+"3. The classes in your labeled column, in classification scenarios\n",
 "\n",
-"shap_values, expected_values, overall_summary, overall_imp, per_class_summary, per_class_imp = \\\n",
-"    explain_model(fitted_model, X_train, X_test, features=features)"
+"The *automl_explainer_setup_obj* contains all the structures from the above list."
 ]
},
{
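The mimic approach introduced here explains a black-box model by fitting an interpretable surrogate to the black box's predictions and reading importances off the surrogate. A minimal sketch of that idea using only scikit-learn (a shallow decision tree stands in for the LightGBM surrogate; this is not the azureml implementation):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)

# "Black box" model playing the role of the AutoML fitted_model.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Surrogate: fit an interpretable model to the black box's *predictions*,
# then read feature importances off the surrogate instead of the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

importances = surrogate.feature_importances_
print(importances.shape)  # one importance value per input feature
```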
@@ -323,8 +349,257 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(overall_summary)\n",
-"print(overall_imp)"
+"from azureml.train.automl.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations\n",
+"\n",
+"automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, \n",
+" X_test=X_test, y=y_train, \n",
+" task='classification')"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Initialize the Mimic Explainer for feature importance\n",
+"To explain AutoML models, use the *MimicWrapper* from the *azureml.explain.model* package. The *MimicWrapper* can be initialized with fields from *automl_explainer_setup_obj*, your workspace, and a LightGBM model, which acts as a surrogate model to explain the AutoML model (*fitted_model* here). The *MimicWrapper* also takes the *automl_run* object, where the raw and engineered explanations will be uploaded."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel\n",
+"from azureml.explain.model.mimic_wrapper import MimicWrapper\n",
+"explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, \n",
+" init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run,\n",
+" features=automl_explainer_setup_obj.engineered_feature_names, \n",
+" feature_maps=[automl_explainer_setup_obj.feature_map],\n",
+" classes=automl_explainer_setup_obj.classes)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Use Mimic Explainer for computing and visualizing engineered feature importance\n",
+"The *explain()* method in *MimicWrapper* can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use *ExplanationDashboard* to view the dashboard visualization of the feature importance values of the engineered features generated by the AutoML featurizers."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)\n",
+"print(engineered_explanations.get_feature_importance_dict())\n",
+"from azureml.contrib.interpret.visualize import ExplanationDashboard\n",
+"ExplanationDashboard(engineered_explanations, automl_explainer_setup_obj.automl_estimator, automl_explainer_setup_obj.X_test_transform)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Use Mimic Explainer for computing and visualizing raw feature importance\n",
+"The *explain()* method in *MimicWrapper* can be called again with the transformed test samples, setting *get_raw* to *True*, to get the feature importance for the raw features. You can also use *ExplanationDashboard* to view the dashboard visualization of the feature importance values of the raw features."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"raw_explanations = explainer.explain(['local', 'global'], get_raw=True, \n",
+" raw_feature_names=automl_explainer_setup_obj.raw_feature_names,\n",
+" eval_dataset=automl_explainer_setup_obj.X_test_transform)\n",
+"print(raw_explanations.get_feature_importance_dict())\n",
+"from azureml.contrib.interpret.visualize import ExplanationDashboard\n",
+"ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, automl_explainer_setup_obj.X_test_raw)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Operationalize\n",
+"In this section we show how you can operationalize an AutoML model and the explainer which was used to compute the explanations in the previous section.\n",
+"\n",
+"#### Register the AutoML model and the scoring explainer\n",
+"We use the *TreeScoringExplainer* from the *azureml.explain.model* package to create the scoring explainer, which will be used to compute the raw and engineered feature importances at inference time. Note that we initialize the scoring explainer with the *feature_map* computed previously; the *feature_map* will be used by the scoring explainer to return the raw feature importance.\n",
+"\n",
+"In the cell below, we pickle the scoring explainer and register the AutoML model and the scoring explainer with the Model Management Service."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azureml.explain.model.scoring.scoring_explainer import TreeScoringExplainer, save\n",
+"\n",
+"# Initialize the ScoringExplainer\n",
+"scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map])\n",
+"\n",
+"# Pickle scoring explainer locally\n",
+"save(scoring_explainer, exist_ok=True)\n",
+"\n",
+"# Register trained automl model present in the 'outputs' folder in the artifacts\n",
+"original_model = automl_run.register_model(model_name='automl_model', \n",
+" model_path='outputs/model.pkl')\n",
+"\n",
+"# Register scoring explainer\n",
+"automl_run.upload_file('scoring_explainer.pkl', 'scoring_explainer.pkl')\n",
+"scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer', model_path='scoring_explainer.pkl')"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Create the conda dependencies for setting up the service\n",
+"We need to create the conda dependencies comprising the *azureml-explain-model*, *azureml-train-automl* and *azureml-defaults* packages."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azureml.core.conda_dependencies import CondaDependencies \n",
+"\n",
+"azureml_pip_packages = [\n",
+"    'azureml-explain-model', 'azureml-train-automl', 'azureml-defaults'\n",
+"]\n",
+"\n",
+"# specify CondaDependencies obj\n",
+"myenv = CondaDependencies.create(conda_packages=['scikit-learn', 'pandas', 'numpy', 'py-xgboost<=0.80'],\n",
+" pip_packages=azureml_pip_packages,\n",
+" pin_sdk_version=True)\n",
+"\n",
+"with open(\"myenv.yml\",\"w\") as f:\n",
+"    f.write(myenv.serialize_to_string())\n",
+"\n",
+"with open(\"myenv.yml\",\"r\") as f:\n",
+"    print(f.read())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### View your scoring file"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"with open(\"score_local_explain.py\",\"r\") as f:\n",
+"    print(f.read())"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Deploy the service\n",
+"In the cell below, we deploy the service using the conda file and the scoring file from the previous steps."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azureml.core.webservice import Webservice\n",
+"from azureml.core.model import InferenceConfig\n",
+"from azureml.core.webservice import AciWebservice\n",
+"from azureml.core.model import Model\n",
+"\n",
+"aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, \n",
+" memory_gb=1, \n",
+" tags={\"data\": \"Bank Marketing\", \n",
+" \"method\" : \"local_explanation\"}, \n",
+" description='Get local explanations for Bank marketing test data')\n",
+"\n",
+"inference_config = InferenceConfig(runtime= \"python\", \n",
+" entry_script=\"score_local_explain.py\",\n",
+" conda_file=\"myenv.yml\")\n",
+"\n",
+"# Use configs and models generated above\n",
+"service = Model.deploy(ws, 'model-scoring', [scoring_explainer_model, original_model], inference_config, aciconfig)\n",
+"service.wait_for_deployment(show_output=True)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### View the service logs"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"service.get_logs()"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Inference using some test data\n",
+"Run inference with some test data to see the predicted value from the AutoML model, the engineered feature importance for the predicted value, and the raw feature importance for the predicted value."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"if service.state == 'Healthy':\n",
+"    # Serialize the first row of the test data into json\n",
+"    X_test_json = X_test[:1].to_json(orient='records')\n",
+"    print(X_test_json)\n",
+"    # Call the service to get the predictions and the engineered and raw explanations\n",
+"    output = service.run(X_test_json)\n",
+"    # Print the predicted value\n",
+"    print(output['predictions'])\n",
+"    # Print the engineered feature importances for the predicted value\n",
+"    print(output['engineered_local_importance_values'])\n",
+"    # Print the raw feature importances for the predicted value\n",
+"    print(output['raw_local_importance_values'])"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"#### Delete the service\n",
+"Delete the service once you have finished inferencing."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"service.delete()"
 ]
}
],
@@ -349,7 +624,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.6"
+"version": "3.6.7"
 }
},
"nbformat": 4,
@@ -2,8 +2,10 @@ name: auto-ml-model-explanation
 dependencies:
 - pip:
   - azureml-sdk
+  - interpret
   - azureml-train-automl
   - azureml-widgets
   - matplotlib
   - pandas_ml
   - azureml-explain-model
+  - azureml-contrib-interpret
@@ -0,0 +1,42 @@
+import json
+import numpy as np
+import pandas as pd
+import os
+import pickle
+import azureml.train.automl
+import azureml.explain.model
+from azureml.train.automl.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations
+from sklearn.externals import joblib
+from azureml.core.model import Model
+
+
+def init():
+
+    global automl_model
+    global scoring_explainer
+
+    # Retrieve the path to the model file using the model name
+    # Assume original model is named original_prediction_model
+    automl_model_path = Model.get_model_path('automl_model')
+    scoring_explainer_path = Model.get_model_path('scoring_explainer')
+
+    automl_model = joblib.load(automl_model_path)
+    scoring_explainer = joblib.load(scoring_explainer_path)
+
+
+def run(raw_data):
+    # Get predictions and explanations for each data point
+    data = pd.read_json(raw_data, orient='records')
+    # Make prediction
+    predictions = automl_model.predict(data)
+    # Setup for inferencing explanations
+    automl_explainer_setup_obj = automl_setup_model_explanations(automl_model,
+                                                                 X_test=data, task='classification')
+    # Retrieve model explanations for engineered explanations
+    engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform)
+    # Retrieve model explanations for raw explanations
+    raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True)
+    # You can return any data type as long as it is JSON-serializable
+    return {'predictions': predictions.tolist(),
+            'engineered_local_importance_values': engineered_local_importance_values,
+            'raw_local_importance_values': raw_local_importance_values}
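The new scoring script follows the usual `init()`/`run()` contract: `init()` loads pickled artifacts once at startup, and `run()` parses a JSON payload into a DataFrame and returns a JSON-serializable dict. A tiny local sketch of that contract, with a hypothetical threshold rule standing in for the AutoML model and scoring explainer:

```python
import json
from io import StringIO

import pandas as pd

model = None  # populated once by init(), mirroring the scoring script's global


def init():
    global model
    # Stand-in for joblib.load(Model.get_model_path(...)): a trivial threshold "model".
    model = lambda df: (df["duration"] > 300).astype(int)


def run(raw_data):
    # Parse the JSON payload into a DataFrame, as the real run() does.
    data = pd.read_json(StringIO(raw_data), orient="records")
    predictions = model(data)
    # Return only JSON-serializable types.
    return {"predictions": predictions.tolist()}


init()
payload = json.dumps([{"duration": 120}, {"duration": 450}])
print(run(payload))  # -> {'predictions': [0, 1]}
```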
@@ -87,9 +87,8 @@
 "source": [
 "ws = Workspace.from_config()\n",
 "\n",
-"# Choose a name for the experiment and specify the project folder.\n",
+"# Choose a name for the experiment.\n",
 "experiment_name = 'automl-regression-concrete'\n",
-"project_folder = './sample_projects/automl-regression-concrete'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -99,7 +98,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -160,20 +158,7 @@
 "source": [
 "# Data\n",
 "\n",
-"Here load the data in the get_data script to be utilized in azure compute. To do this, first load all the necessary libraries and dependencies to set up paths for the data and to create the conda_run_config."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"if not os.path.isdir('data'):\n",
-"    os.mkdir('data')\n",
-"    \n",
-"if not os.path.exists(project_folder):\n",
-"    os.makedirs(project_folder)"
+"Create a run configuration for the remote run."
 ]
},
{
@@ -203,7 +188,7 @@
 "source": [
 "### Load Data\n",
 "\n",
-"Here create the script to be run in azure compute for loading the data, load the concrete strength dataset into the X and y variables. Next, split the data using random_split and return X_train and y_train for training the model. Finally, return X_train and y_train for training the model."
+"Load the concrete strength dataset into X and y. X contains the training features, which are inputs to the model. y contains the training labels, which are the expected output of the model."
 ]
},
{
|
|||||||
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
||||||
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
|
||||||
"|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
|
"|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
|
||||||
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
|
|
||||||
"\n",
|
"\n",
|
||||||
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
|
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
|
||||||
]
|
]
|
||||||
@@ -268,7 +252,6 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"automl_config = AutoMLConfig(task = 'regression',\n",
|
"automl_config = AutoMLConfig(task = 'regression',\n",
|
||||||
" debug_log = 'automl.log',\n",
|
" debug_log = 'automl.log',\n",
|
||||||
" path = project_folder,\n",
|
|
||||||
" run_configuration=conda_run_config,\n",
|
" run_configuration=conda_run_config,\n",
|
||||||
" X = X_train,\n",
|
" X = X_train,\n",
|
||||||
" y = y_train,\n",
|
" y = y_train,\n",
|
||||||
@@ -490,7 +473,7 @@
|
|||||||
"metadata": {},
|
"metadata": {},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost==0.80'], pip_packages=['azureml-train-automl'])\n",
|
"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost==0.80'], pip_packages=['azureml-defaults','azureml-train-automl'])\n",
|
||||||
"\n",
|
"\n",
|
||||||
"conda_env_file_name = 'myenv.yml'\n",
|
"conda_env_file_name = 'myenv.yml'\n",
|
||||||
"myenv.save_to_file('.', conda_env_file_name)"
|
"myenv.save_to_file('.', conda_env_file_name)"
|
||||||
@@ -527,45 +510,7 @@
|
|||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"### Create a Container Image\n",
|
"### Deploy the model as a Web Service on Azure Container Instance"
|
||||||
"\n",
|
|
||||||
"Next use Azure Container Instances for deploying models as a web service for quickly deploying and validating your model\n",
|
|
||||||
"or when testing a model that is under development."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"from azureml.core.image import Image, ContainerImage\n",
|
|
||||||
"\n",
|
|
||||||
"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
|
|
||||||
" execution_script = script_file_name,\n",
|
|
||||||
" conda_file = conda_env_file_name,\n",
|
|
||||||
" tags = {'area': \"digits\", 'type': \"automl_regression\"},\n",
|
|
||||||
" description = \"Image for automl regression sample\")\n",
|
|
||||||
"\n",
|
|
||||||
"image = Image.create(name = \"automlsampleimage\",\n",
|
|
||||||
" # this is the model object \n",
|
|
||||||
" models = [model],\n",
|
|
||||||
" image_config = image_config, \n",
|
|
||||||
" workspace = ws)\n",
|
|
||||||
"\n",
|
|
||||||
"image.wait_for_creation(show_output = True)\n",
|
|
||||||
"\n",
|
|
||||||
"if image.creation_state == 'Failed':\n",
|
|
||||||
" print(\"Image build log at: \" + image.image_build_log_uri)"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Deploy the Image as a Web Service on Azure Container Instance\n",
|
|
||||||
"\n",
|
|
||||||
"Deploy an image that contains the model and other assets needed by the service."
|
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
@@ -574,28 +519,23 @@
 "metadata": {},
 "outputs": [],
 "source": [
+"from azureml.core.model import InferenceConfig\n",
 "from azureml.core.webservice import AciWebservice\n",
+"from azureml.core.webservice import Webservice\n",
+"from azureml.core.model import Model\n",
+"\n",
+"inference_config = InferenceConfig(runtime = \"python\", \n",
+" entry_script = script_file_name,\n",
+" conda_file = conda_env_file_name)\n",
 "\n",
 "aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
 " memory_gb = 1, \n",
 " tags = {'area': \"digits\", 'type': \"automl_regression\"}, \n",
-" description = 'sample service for Automl Regression')"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azureml.core.webservice import Webservice\n",
+" description = 'sample service for Automl Regression')\n",
 "\n",
 "aci_service_name = 'automl-sample-concrete'\n",
 "print(aci_service_name)\n",
-"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
-" image = image,\n",
-" name = aci_service_name,\n",
-" workspace = ws)\n",
+"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
 "aci_service.wait_for_deployment(True)\n",
 "print(aci_service.state)"
 ]
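The hunk above migrates deployment from the image-based flow (`ContainerImage.image_configuration` plus `Webservice.deploy_from_image`) to `InferenceConfig` plus `Model.deploy`. A condensed sketch of the new pattern follows; it is not runnable on its own, since it assumes a live Azure ML `Workspace` (`ws`), a registered `model`, and the `script_file_name`/`conda_env_file_name` variables defined earlier in the notebook:

```
# Sketch only: requires azureml-core and a live workspace; script_file_name,
# conda_env_file_name, ws, and model come from earlier notebook cells.
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

inference_config = InferenceConfig(runtime="python",
                                   entry_script=script_file_name,
                                   conda_file=conda_env_file_name)

aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# Model.deploy builds the container image implicitly, so the explicit
# Image.create / wait_for_creation step from the old flow disappears.
service = Model.deploy(ws, "automl-sample-concrete", [model],
                       inference_config, aci_config)
service.wait_for_deployment(show_output=True)
print(service.state)
```

The design win is that the image lifecycle is managed for you: the old code had to build an image, poll `wait_for_creation`, and then deploy it, while `Model.deploy` collapses those steps into one call.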
@@ -2,9 +2,11 @@ name: auto-ml-regression-concrete-strength
 dependencies:
 - pip:
   - azureml-sdk
+  - interpret
   - azureml-defaults
   - azureml-explain-model
   - azureml-train-automl
   - azureml-widgets
   - matplotlib
   - pandas_ml
+  - azureml-dataprep[pandas]
@@ -87,9 +87,8 @@
 "source": [
 "ws = Workspace.from_config()\n",
 "\n",
-"# Choose a name for the experiment and specify the project folder.\n",
+"# Choose a name for the experiment.\n",
 "experiment_name = 'automl-regression-hardware'\n",
-"project_folder = './sample_projects/automl-remote-regression'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -99,7 +98,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -160,20 +158,7 @@
 "source": [
 "# Data\n",
 "\n",
-"Here load the data in the get_data script to be utilized in azure compute. To do this, first load all the necessary libraries and dependencies to set up paths for the data and to create the conda_run_config."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"if not os.path.isdir('data'):\n",
-" os.mkdir('data')\n",
-" \n",
-"if not os.path.exists(project_folder):\n",
-" os.makedirs(project_folder)"
+"Create a run configuration for the remote run."
 ]
 },
 {
@@ -203,7 +188,7 @@
 "source": [
 "### Load Data\n",
 "\n",
-"Here create the script to be run in azure compute for loading the data, load the hardware dataset into the X and y variables. Next split the data using random_split and return X_train and y_train for training the model."
+"Load the hardware performance dataset into X and y. X contains the training features, which are inputs to the model. y contains the training labels, which are the expected output of the model."
 ]
 },
 {
@@ -239,7 +224,6 @@
 "|**n_cross_validations**|Number of cross validation splits.|\n",
 "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
 "|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
-"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|\n",
 "\n",
 "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
 ]
@@ -268,8 +252,7 @@
 "}\n",
 "\n",
 "automl_config = AutoMLConfig(task = 'regression',\n",
-" debug_log = 'automl_errors_20190417.log',\n",
-" path = project_folder,\n",
+" debug_log = 'automl_errors.log',\n",
 " run_configuration=conda_run_config,\n",
 " X = X_train,\n",
 " y = y_train,\n",
@@ -508,7 +491,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost==0.80'], pip_packages=['azureml-train-automl'])\n",
+"myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn','py-xgboost==0.80'], pip_packages=['azureml-defaults','azureml-train-automl'])\n",
 "\n",
 "conda_env_file_name = 'myenv.yml'\n",
 "myenv.save_to_file('.', conda_env_file_name)"
@@ -545,45 +528,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Create a Container Image\n",
-"\n",
-"Next use Azure Container Instances for deploying models as a web service for quickly deploying and validating your model\n",
-"or when testing a model that is under development."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azureml.core.image import Image, ContainerImage\n",
-"\n",
-"image_config = ContainerImage.image_configuration(runtime= \"python\",\n",
-" execution_script = script_file_name,\n",
-" conda_file = conda_env_file_name,\n",
-" tags = {'area': \"digits\", 'type': \"automl_regression\"},\n",
-" description = \"Image for automl regression sample\")\n",
-"\n",
-"image = Image.create(name = \"automlsampleimage\",\n",
-" # this is the model object \n",
-" models = [model],\n",
-" image_config = image_config, \n",
-" workspace = ws)\n",
-"\n",
-"image.wait_for_creation(show_output = True)\n",
-"\n",
-"if image.creation_state == 'Failed':\n",
-" print(\"Image build log at: \" + image.image_build_log_uri)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### Deploy the Image as a Web Service on Azure Container Instance\n",
-"\n",
-"Deploy an image that contains the model and other assets needed by the service."
+"### Deploy the model as a Web Service on Azure Container Instance"
 ]
 },
 {
@@ -592,28 +537,23 @@
 "metadata": {},
 "outputs": [],
 "source": [
+"from azureml.core.model import InferenceConfig\n",
 "from azureml.core.webservice import AciWebservice\n",
+"from azureml.core.webservice import Webservice\n",
+"from azureml.core.model import Model\n",
+"\n",
+"inference_config = InferenceConfig(runtime = \"python\", \n",
+" entry_script = script_file_name,\n",
+" conda_file = conda_env_file_name)\n",
 "\n",
 "aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
 " memory_gb = 1, \n",
 " tags = {'area': \"digits\", 'type': \"automl_regression\"}, \n",
-" description = 'sample service for Automl Regression')"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from azureml.core.webservice import Webservice\n",
+" description = 'sample service for Automl Regression')\n",
 "\n",
 "aci_service_name = 'automl-sample-hardware'\n",
 "print(aci_service_name)\n",
-"aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,\n",
-" image = image,\n",
-" name = aci_service_name,\n",
-" workspace = ws)\n",
+"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
 "aci_service.wait_for_deployment(True)\n",
 "print(aci_service.state)"
 ]
@@ -2,9 +2,11 @@ name: auto-ml-regression-hardware-performance
 dependencies:
 - pip:
   - azureml-sdk
+  - interpret
   - azureml-defaults
   - azureml-explain-model
   - azureml-train-automl
   - azureml-widgets
   - matplotlib
   - pandas_ml
+  - azureml-dataprep[pandas]
@@ -84,9 +84,8 @@
 "source": [
 "ws = Workspace.from_config()\n",
 "\n",
-"# Choose a name for the experiment and specify the project folder.\n",
+"# Choose a name for the experiment.\n",
 "experiment_name = 'automl-local-regression'\n",
-"project_folder = './sample_projects/automl-local-regression'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -96,7 +95,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -144,8 +142,7 @@
 "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
 "|**n_cross_validations**|Number of cross validation splits.|\n",
 "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
-"|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|\n",
-"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
+"|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|"
 ]
 },
 {
@@ -162,8 +159,7 @@
 " debug_log = 'automl.log',\n",
 " verbosity = logging.INFO,\n",
 " X = X_train, \n",
-" y = y_train,\n",
-" path = project_folder)"
+" y = y_train)"
 ]
 },
 {
@@ -93,9 +93,8 @@
 "source": [
 "ws = Workspace.from_config()\n",
 "\n",
-"# Choose a name for the run history container in the workspace.\n",
+"# Choose an experiment name.\n",
 "experiment_name = 'automl-remote-amlcompute-with-onnx'\n",
-"project_folder = './project'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -105,7 +104,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -179,12 +177,6 @@
 "source": [
 "iris = datasets.load_iris()\n",
 "\n",
-"if not os.path.isdir('data'):\n",
-" os.mkdir('data')\n",
-"\n",
-"if not os.path.exists(project_folder):\n",
-" os.makedirs(project_folder)\n",
-"\n",
 "X_train, X_test, y_train, y_test = train_test_split(iris.data, \n",
 " iris.target, \n",
 " test_size=0.2, \n",
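The context lines above split the iris dataset with `train_test_split`. A minimal self-contained version of that step follows; note the fixed `random_state`, which is an addition for reproducibility and does not appear in the notebook cell itself:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split

# Load the 150-sample iris dataset and hold out 20% of it for testing.
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=0)

print(X_train.shape, X_test.shape)  # (120, 4) (30, 4)
```

The surrounding hunks then wrap these arrays in `pandas` DataFrames and write them to `data/X_train.csv` and `data/y_train.csv` for the remote run.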
@@ -211,6 +203,9 @@
 "X_test = pd.DataFrame(X_test, columns=['c1', 'c2', 'c3', 'c4'])\n",
 "y_train = pd.DataFrame(y_train, columns=['label'])\n",
 "\n",
+"if not os.path.isdir('data'):\n",
+" os.mkdir('data')\n",
+"\n",
 "X_train.to_csv(\"data/X_train.csv\", index=False)\n",
 "y_train.to_csv(\"data/y_train.csv\", index=False)\n",
 "\n",
@@ -264,7 +259,7 @@
 "source": [
 "## Train\n",
 "\n",
-"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
+"You can specify `automl_settings` as `**kwargs` as well. \n",
 "\n",
 "**Note:** Set the parameter enable_onnx_compatible_models=True, if you also want to generate the ONNX compatible models. Please note, the forecasting task and TensorFlow models are not ONNX compatible yet.\n",
 "\n",
@@ -276,7 +271,7 @@
 "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
 "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
 "|**n_cross_validations**|Number of cross validation splits.|\n",
-"|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|\n",
+"|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of nodes in the AmlCompute cluster.|\n",
 "|**enable_onnx_compatible_models**|Enable the ONNX compatible models in the experiment.|"
 ]
 },
@@ -305,7 +300,6 @@
 "\n",
 "automl_config = AutoMLConfig(task = 'classification',\n",
 " debug_log = 'automl_errors.log',\n",
-" path = project_folder,\n",
 " run_configuration=conda_run_config,\n",
 " X = X,\n",
 " y = y,\n",
@@ -2,6 +2,7 @@ name: auto-ml-remote-amlcompute-with-onnx
 dependencies:
 - pip:
   - azureml-sdk
+  - interpret
   - azureml-defaults
   - azureml-explain-model
   - azureml-train-automl
@@ -95,9 +95,8 @@
 "source": [
 "ws = Workspace.from_config()\n",
 "\n",
-"# Choose a name for the run history container in the workspace.\n",
+"# Choose an experiment name.\n",
 "experiment_name = 'automl-remote-amlcompute'\n",
-"project_folder = './project'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -107,7 +106,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -183,10 +181,7 @@
 "\n",
 "if not os.path.isdir('data'):\n",
 " os.mkdir('data')\n",
-" \n",
-"if not os.path.exists(project_folder):\n",
-" os.makedirs(project_folder)\n",
-" \n",
+"\n",
 "pd.DataFrame(data_train.data[100:,:]).to_csv(\"data/X_train.csv\", index=False)\n",
 "pd.DataFrame(data_train.target[100:]).to_csv(\"data/y_train.csv\", index=False)\n",
 "\n",
@@ -240,7 +235,7 @@
 "source": [
 "## Train\n",
 "\n",
-"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
+"You can specify `automl_settings` as `**kwargs` as well.\n",
 "\n",
 "**Note:** When using AmlCompute, you can't pass Numpy arrays directly to the fit method.\n",
 "\n",
@@ -250,7 +245,7 @@
 "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
 "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
 "|**n_cross_validations**|Number of cross validation splits.|\n",
-"|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|"
+"|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of nodes in the AmlCompute cluster.|"
 ]
 },
 {
@@ -261,7 +256,7 @@
 "source": [
 "automl_settings = {\n",
 " \"iteration_timeout_minutes\": 10,\n",
-" \"iterations\": 20,\n",
+" \"iterations\": 10,\n",
 " \"n_cross_validations\": 5,\n",
 " \"primary_metric\": 'AUC_weighted',\n",
 " \"preprocess\": False,\n",
@@ -271,7 +266,6 @@
 "\n",
 "automl_config = AutoMLConfig(task = 'classification',\n",
 " debug_log = 'automl_errors.log',\n",
-" path = project_folder,\n",
 " run_configuration=conda_run_config,\n",
 " X = X,\n",
 " y = y,\n",
@@ -2,6 +2,7 @@ name: auto-ml-remote-amlcompute
 dependencies:
 - pip:
   - azureml-sdk
+  - interpret
   - azureml-defaults
   - azureml-explain-model
   - azureml-train-automl
@@ -82,8 +82,6 @@
 "experiment_name = 'non_sample_weight_experiment'\n",
 "sample_weight_experiment_name = 'sample_weight_experiment'\n",
 "\n",
-"project_folder = './sample_projects/sample_weight'\n",
-"\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "sample_weight_experiment=Experiment(ws, sample_weight_experiment_name)\n",
 "\n",
@@ -93,7 +91,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -131,8 +128,7 @@
 " n_cross_validations = 2,\n",
 " verbosity = logging.INFO,\n",
 " X = X_train, \n",
-" y = y_train,\n",
-" path = project_folder)\n",
+" y = y_train)\n",
 "\n",
 "automl_sample_weight = AutoMLConfig(task = 'classification',\n",
 " debug_log = 'automl_errors.log',\n",
@@ -143,8 +139,7 @@
 " verbosity = logging.INFO,\n",
 " X = X_train, \n",
 " y = y_train,\n",
-" sample_weight = sample_weight,\n",
-" path = project_folder)"
+" sample_weight = sample_weight)"
 ]
 },
 {
@@ -87,8 +87,6 @@
 "\n",
 "# choose a name for the experiment\n",
 "experiment_name = 'sparse-data-train-test-split'\n",
-"# project folder\n",
-"project_folder = './sample_projects/sparse-data-train-test-split'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -98,7 +96,6 @@
 "output['Workspace'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "outputDf = pd.DataFrame(data = output, index = [''])\n",
@@ -165,8 +162,7 @@
 "|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
 "|**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
 "|**X_valid**|(sparse) array-like, shape = [n_samples, n_features] for the custom validation set.|\n",
-"|**y_valid**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|\n",
-"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|"
+"|**y_valid**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.|"
 ]
 },
 {
@@ -185,8 +181,7 @@
 " X = X_train, \n",
 " y = y_train,\n",
 " X_valid = X_valid, \n",
-" y_valid = y_valid, \n",
-" path = project_folder)"
+" y_valid = y_valid)"
 ]
 },
 {
@@ -1,17 +0,0 @@
--- This shows using the AutoMLPredict stored procedure to predict using a forecasting model for the nyc_energy dataset.
-
-DECLARE @Model NVARCHAR(MAX) = (SELECT TOP 1 Model FROM dbo.aml_model
- WHERE ExperimentName = 'automl-sql-forecast'
- ORDER BY CreatedDate DESC)
-
-EXEC dbo.AutoMLPredict @input_query='
-SELECT CAST(timeStamp AS NVARCHAR(30)) AS timeStamp,
- demand,
- precip,
- temp
-FROM nyc_energy
-WHERE demand IS NOT NULL AND precip IS NOT NULL AND temp IS NOT NULL
-AND timeStamp >= ''2017-02-01''',
-@label_column='demand',
-@model=@model
-WITH RESULT SETS ((timeStamp NVARCHAR(30), actual_demand FLOAT, precip FLOAT, temp FLOAT, predicted_demand FLOAT))
@@ -77,9 +77,8 @@
 "source": [
 "ws = Workspace.from_config()\n",
 "\n",
-"# Choose a name for the experiment and specify the project folder.\n",
+"# Choose a name for the experiment.\n",
 "experiment_name = 'automl-subsampling'\n",
-"project_folder = './sample_projects/automl-subsampling'\n",
 "\n",
 "experiment = Experiment(ws, experiment_name)\n",
 "\n",
@@ -89,7 +88,6 @@
 "output['Workspace Name'] = ws.name\n",
 "output['Resource Group'] = ws.resource_group\n",
 "output['Location'] = ws.location\n",
-"output['Project Directory'] = project_folder\n",
 "output['Experiment Name'] = experiment.name\n",
 "pd.set_option('display.max_colwidth', -1)\n",
 "pd.DataFrame(data = output, index = ['']).T"
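The cell in this hunk builds a one-row dict of workspace properties and transposes it into a one-column summary table. The same pattern, with assumed stand-in values for the workspace properties, looks like this:

```python
import pandas as pd

# Assumed stand-in values for the workspace properties read in the notebook.
output = {
    "Workspace Name": "my-workspace",
    "Resource Group": "my-rg",
    "Location": "eastus2",
    "Experiment Name": "automl-subsampling",
}

# Transpose so each property becomes a row of a single unnamed column.
summary = pd.DataFrame(data=output, index=[""]).T
print(summary)
```

After removing `project_folder`, the `'Project Directory'` row simply no longer appears in this table; no other cell depends on it.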
@@ -150,8 +148,7 @@
 "                             verbosity = logging.INFO,\n",
 "                             X = X_train, \n",
 "                             y = y_train,\n",
-"                             enable_subsampling=True,\n",
-"                             path = project_folder)"
+"                             enable_subsampling=True)"
 ]
 },
 {
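The cell in this hunk keeps `enable_subsampling=True` while dropping the project folder. The idea behind subsampling — training early iterations on a random fraction of the rows rather than the full dataset — can be sketched in plain Python (a hedged illustration of the concept, not AutoML's internal implementation):

```python
import random

def subsample(X, y, fraction, seed=0):
    """Return a random fraction of (X, y) rows, keeping pairs aligned."""
    rng = random.Random(seed)
    n = max(1, int(len(X) * fraction))
    idx = rng.sample(range(len(X)), n)
    return [X[i] for i in idx], [y[i] for i in idx]

# Toy data: feature value i, label i % 2, so alignment is easy to verify.
X = [[float(i)] for i in range(100)]
y = [i % 2 for i in range(100)]

X_small, y_small = subsample(X, y, fraction=0.1)
assert len(X_small) == len(y_small) == 10
assert all(int(xs[0]) % 2 == ys for xs, ys in zip(X_small, y_small))
```

Keeping the index list shared between `X` and `y` is what preserves the sample-to-target pairing; sampling the two lists independently would silently corrupt the labels.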
@@ -170,13 +167,6 @@
 "source": [
 "local_run = experiment.submit(automl_config, show_output = True)"
 ]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": []
 }
 ],
 "metadata": {
@@ -1,33 +1,73 @@
 Azure Databricks is a managed Spark offering on Azure, and customers already use it for advanced analytics. It provides a collaborative notebook-based environment with CPU- or GPU-based compute clusters.
 
 In this section, you will find sample notebooks on how to use the Azure Machine Learning SDK with Azure Databricks. You can train a model using Spark MLlib and then deploy the model to ACI/AKS from within Azure Databricks. You can also use the Automated ML capability (**public preview**) of the Azure ML SDK with Azure Databricks.
 
 - Customers who use Azure Databricks for advanced analytics can now use the same cluster to run experiments with or without automated machine learning.
 - You can keep the data within the same cluster.
 - You can leverage the local worker nodes with autoscale and auto-termination capabilities.
 - You can use multiple cores of your Azure Databricks cluster to perform simultaneous training.
 - You can further tune the model generated by automated machine learning if you choose to.
 - Every run (including the best run) is available as a pipeline, which you can tune further if needed.
 - The model trained using Azure Databricks can be registered in the Azure ML workspace and then deployed to Azure managed compute (ACI or AKS) using the Azure Machine Learning SDK.
 
 Please follow our [Azure doc](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#azure-databricks) to install the SDK in your Azure Databricks cluster before trying any of the sample notebooks.
 
 **Single file** -
 The following archive contains all the sample notebooks. You can run the notebooks after importing the [DBC](Databricks_AMLSDK_1-4_6.dbc) archive into your Databricks workspace instead of downloading them individually.
 
 Notebooks 1-4 have to be run sequentially. They relate to an income-prediction experiment based on this [dataset](https://archive.ics.uci.edu/ml/datasets/adult) and demonstrate how to prep data, train, and operationalize a Spark ML model with the Azure ML Python SDK from within Azure Databricks.
 
 Notebook 6 is an Automated ML sample notebook for classification.
 
 Learn more about [how to use Azure Databricks as a development environment](https://docs.microsoft.com/azure/machine-learning/service/how-to-configure-environment#azure-databricks) for Azure Machine Learning service.
 
-**Databricks as a Compute Target from AML Pipelines**
+**Databricks as a Compute Target from Azure ML Pipelines**
 You can use Azure Databricks as a compute target from [Azure Machine Learning Pipelines](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines). Take a look at this notebook for details: [aml-pipelines-use-databricks-as-compute-target.ipynb](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/databricks-as-remote-compute-target/aml-pipelines-use-databricks-as-compute-target.ipynb).
 
+# Linked Azure Databricks and Azure Machine Learning Workspaces (Preview)
+Customers can now link Azure Databricks and Azure ML workspaces to better enable cross-Azure ML scenarios by [managing their tracking data in a single place when using the MLflow client](https://mlflow.org/docs/latest/tracking.html#mlflow-tracking) - the Azure ML workspace.
+
+## Linking the Workspaces (Admin operation)
+
+1. The Azure Databricks blade in the Azure portal now includes a button to link an Azure ML workspace.
+2. Either a new or an existing Azure ML workspace can be linked in the resulting prompt. Follow any instructions to set up the Azure ML workspace.
+3. After a successful link operation, the Azure Databricks overview should reflect the linked status.
+
+## Configure MLflow to send data to Azure ML (All roles)
+
+1. Add azureml-mlflow as a library to any notebook or cluster that should send data to Azure ML. You can do this via:
+   1. [DBUtils](https://docs.azuredatabricks.net/user-guide/dev-tools/dbutils.html#dbutils-library)
+      ```
+      dbutils.library.installPyPI("azureml-mlflow")
+      dbutils.library.restartPython()  # Removes Python state
+      ```
+   2. [Cluster Libraries](https://docs.azuredatabricks.net/user-guide/libraries.html#install-a-library-on-a-cluster)
+2. [Set the MLflow tracking URI](https://mlflow.org/docs/latest/tracking.html#where-runs-are-recorded) to the following scheme:
+   ```
+   adbazureml://${azuremlRegion}.experiments.azureml.net/history/v1.0/subscriptions/${azuremlSubscriptionId}/resourceGroups/${azuremlResourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/${azuremlWorkspaceName}
+   ```
+   You can automatically configure this on your clusters for all subsequent notebook sessions using this helper script instead of manually setting the tracking URI in the notebook:
+   * [AzureML Tracking Cluster Init Script](./linking/README.md)
+3. If configured correctly, you'll now be able to see your MLflow tracking data in both Azure ML (via the REST API and all clients) and Azure Databricks (in the MLflow UI and using the MLflow client).
+
+## Known Preview Limitations
+While we roll this experience out to customers for feedback, there are some known limitations we'd love comments on, in addition to any other issues seen in your workflow.
+
+### 1-to-1 workspace linking
+Currently, an Azure ML workspace can only be linked to one Azure Databricks workspace at a time.
+
+### Data synchronization
+At the moment, tracking data is only generated in the Azure Machine Learning workspace. Editing tags via the Azure Databricks MLflow UI won't be reflected in the Azure ML UI.
+
+### Java and R support
+The experience is currently only available from the Python MLflow client.
+
 For more on SDK concepts, please refer to [notebooks](https://github.com/Azure/MachineLearningNotebooks).
 
 **Please let us know your feedback.**
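The tracking-URI scheme added in the README above is a template with four placeholders. A minimal sketch of filling it in from Python, using assumed placeholder values for the region, subscription, resource group, and workspace name:

```python
# Assumed placeholder values -- substitute your own Azure details.
region = "eastus2"
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "my-resource-group"
workspace_name = "my-aml-workspace"

# Fill the adbazureml:// template from the README.
tracking_uri = (
    f"adbazureml://{region}.experiments.azureml.net/history/v1.0/"
    f"subscriptions/{subscription_id}/"
    f"resourceGroups/{resource_group}/"
    f"providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
)

# With mlflow and azureml-mlflow installed on the cluster, you would then run:
# import mlflow
# mlflow.set_tracking_uri(tracking_uri)
print(tracking_uri)
```

`mlflow.set_tracking_uri` only records the destination for the current session; the init-script approach mentioned in the README persists it across notebook sessions on the cluster.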
@@ -11,13 +11,6 @@
 "Licensed under the MIT License."
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-""
-]
-},
 {
 "cell_type": "markdown",
 "metadata": {},