Compare commits

...

33 Commits

Author SHA1 Message Date
amlrelsa-ms
b0aa91acce update samples from Release-140 as a part of SDK release 2022-05-04 23:01:56 +00:00
Harneet Virk
5928ba83bb Merge pull request #1748 from Azure/release_update/Release-138
update samples from Release-138 as a part of SDK release
2022-04-29 10:40:01 -07:00
amlrelsa-ms
ffa3a43979 update samples from Release-138 as a part of SDK release 2022-04-29 17:09:13 +00:00
Harneet Virk
7ce79a43f1 Merge pull request #1746 from Azure/release_update/Release-137
update samples from Release-137 as a part of SDK release
2022-04-27 11:50:44 -07:00
amlrelsa-ms
edcc50ab0c update samples from Release-137 as a part of SDK release 2022-04-27 17:59:44 +00:00
Harneet Virk
4a391522d0 Merge pull request #1742 from Azure/release_update/Release-136
update samples from Release-136 as a part of SDK release
2022-04-25 13:16:03 -07:00
amlrelsa-ms
1903f78285 update samples from Release-136 as a part of SDK release 2022-04-25 17:08:42 +00:00
Harneet Virk
a4dfcc4693 Merge pull request #1730 from Azure/release_update/Release-135
update samples from Release-135 as a part of SDK release
2022-04-04 14:47:18 -07:00
amlrelsa-ms
faffb3fef7 update samples from Release-135 as a part of SDK release 2022-04-04 20:15:29 +00:00
Harneet Virk
6c6227c403 Merge pull request #1729 from rezasherafat/rl_notebook_update
add docker subfolder to pong notebook directly.
2022-03-30 16:05:10 -07:00
Reza Sherafat
e3be364e7a add docker subfolder to pong notebook directly. 2022-03-30 22:47:50 +00:00
Harneet Virk
90e20a60e9 Merge pull request #1726 from Azure/release_update/Release-131
update samples from Release-131 as a part of SDK release
2022-03-29 19:32:11 -07:00
amlrelsa-ms
33a4eacf1d update samples from Release-131 as a part of SDK release 2022-03-30 02:26:53 +00:00
Harneet Virk
e30b53fddc Merge pull request #1725 from Azure/release_update/Release-130
update samples from Release-130 as a part of SDK release
2022-03-29 15:41:28 -07:00
amlrelsa-ms
95b0392ed2 update samples from Release-130 as a part of SDK release 2022-03-29 22:33:38 +00:00
Harneet Virk
796798cb49 Merge pull request #1724 from Azure/release_update/Release-129
update samples from Release-129 as a part of 1.40.0 SDK release
2022-03-29 12:18:30 -07:00
amlrelsa-ms
08b0ba7854 update samples from Release-129 as a part of SDK release 2022-03-29 18:28:35 +00:00
Harneet Virk
ceaf82acc6 Merge pull request #1720 from Azure/release_update/Release-128
update samples from Release-128 as a part of SDK release
2022-03-21 17:56:06 -07:00
amlrelsa-ms
dadc93cfe5 update samples from Release-128 as a part of SDK release 2022-03-22 00:51:19 +00:00
Harneet Virk
c7076bf95c Merge pull request #1715 from Azure/release_update/Release-127
update samples from Release-127 as a part of SDK release
2022-03-15 17:02:41 -07:00
amlrelsa-ms
ebdffd5626 update samples from Release-127 as a part of SDK release 2022-03-16 00:00:00 +00:00
Harneet Virk
d123880562 Merge pull request #1711 from Azure/release_update/Release-126
update samples from Release-126 as a part of SDK release
2022-03-11 16:53:06 -08:00
amlrelsa-ms
4864e8ea60 update samples from Release-126 as a part of SDK release 2022-03-12 00:47:46 +00:00
Harneet Virk
c86db0d7fd Merge pull request #1707 from Azure/release_update/Release-124
update samples from Release-124 as a part of SDK release
2022-03-08 09:15:45 -08:00
amlrelsa-ms
ccfbbb3b14 update samples from Release-124 as a part of SDK release 2022-03-08 00:37:35 +00:00
Harneet Virk
c42ba64b15 Merge pull request #1700 from Azure/release_update/Release-123
update samples from Release-123 as a part of SDK release
2022-03-01 16:33:02 -08:00
amlrelsa-ms
6d8bf32243 update samples from Release-123 as a part of SDK release 2022-02-28 17:20:57 +00:00
Harneet Virk
9094da4085 Merge pull request #1684 from Azure/release_update/Release-122
update samples from Release-122 as a part of SDK release
2022-02-14 11:38:49 -08:00
amlrelsa-ms
ebf9d2855c update samples from Release-122 as a part of SDK release 2022-02-14 19:24:27 +00:00
v-pbavanari
1bbd78eb33 update samples from Release-121 as a part of SDK release (#1678)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2022-02-02 12:28:49 -05:00
v-pbavanari
77f5a69e04 update samples from Release-120 as a part of SDK release (#1676)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2022-01-28 12:51:49 -05:00
raja7592
ce82af2ab0 update samples from Release-118 as a part of SDK release (#1673)
Co-authored-by: amlrelsa-ms <amlrelsa@microsoft.com>
2022-01-24 20:07:35 -05:00
Harneet Virk
2a2d2efa17 Merge pull request #1658 from Azure/release_update/Release-117
Update samples from Release sdk 1.37.0 as a part of SDK release
2021-12-13 10:36:08 -08:00
100 changed files with 5329 additions and 2550 deletions

View File

@@ -103,7 +103,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.41.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
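This version-bump hunk recurs throughout the release: the "created using" banner moves from 1.37.0 to 1.41.0, while the second print reports whatever is installed. A minimal sketch of a stricter guard, assuming the `packaging` library is available (hypothetical helper, not part of the diff):

# Hypothetical version guard (not in this diff): warn when the installed
# azureml-core is older than the version the notebook was authored against.
import azureml.core
from packaging.version import Version

AUTHORED_WITH = "1.41.0"  # assumption: the version printed by the banner above
if Version(azureml.core.VERSION) < Version(AUTHORED_WITH):
    print(f"Notebook authored with SDK {AUTHORED_WITH}, found {azureml.core.VERSION}; "
          "some features may be unavailable.")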

View File

@@ -188,13 +188,6 @@
"### Script to process data and train model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The _process&#95;data.py_ script used in the step below is a slightly modified implementation of [RAPIDS Mortgage E2E example](https://github.com/rapidsai/notebooks-contrib/blob/master/intermediate_notebooks/E2E/mortgage/mortgage_e2e.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -373,7 +366,7 @@
"run_config.target = gpu_cluster_name\n",
"run_config.environment.docker.enabled = True\n",
"run_config.environment.docker.gpu_support = True\n",
"run_config.environment.docker.base_image = \"mcr.microsoft.com/azureml/base-gpu:intelmpi2018.3-cuda10.0-cudnn7-ubuntu16.04\"\n",
"run_config.environment.docker.base_image = \"mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.1-cudnn8-ubuntu20.04\"\n",
"run_config.environment.spark.precache_packages = False\n",
"run_config.data_references={'data':data_ref.to_config()}"
]

View File

@@ -49,7 +49,7 @@
"* `fairlearn>=0.6.2` (pre-v0.5.0 will work with minor modifications)\n",
"* `joblib`\n",
"* `liac-arff`\n",
"* `raiwidgets~=0.7.0`\n",
"* `raiwidgets`\n",
"\n",
"Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:"
]
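The upgrade cell this note refers to is not included in the diff; a plausible sketch of its contents, inferred from the surrounding text and using the standard notebook pip magic (hypothetical):

# Hypothetical upgrade cell referenced above -- uncomment to run.
# The note states Fairlearn needs scikit-learn >= 0.22.1.
# %pip install --upgrade "scikit-learn>=0.22.1"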

View File

@@ -6,4 +6,6 @@ dependencies:
- fairlearn>=0.6.2
- joblib
- liac-arff
- raiwidgets~=0.15.0
- raiwidgets~=0.17.0
- itsdangerous==2.0.1
- markupsafe<2.1.0
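The two new pins are most likely compatibility workarounds rather than features (interpretation, not stated in the diff): markupsafe 2.1 removed `soft_unicode`, which breaks older Jinja2, and itsdangerous 2.1 removed APIs that older Flask-based tooling still imports, so holding them back keeps the `raiwidgets` dashboard stack importable.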

View File

@@ -51,7 +51,7 @@
"* `fairlearn>=0.6.2` (also works for pre-v0.5.0 with slight modifications)\n",
"* `joblib`\n",
"* `liac-arff`\n",
"* `raiwidgets~=0.7.0`\n",
"* `raiwidgets`\n",
"\n",
"Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:"
]

View File

@@ -6,4 +6,6 @@ dependencies:
- fairlearn>=0.6.2
- joblib
- liac-arff
- raiwidgets~=0.15.0
- raiwidgets~=0.17.0
- itsdangerous==2.0.1
- markupsafe<2.1.0

View File

@@ -1,29 +1,31 @@
name: azure_automl
channels:
- conda-forge
- pytorch
- main
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- pip==21.1.2
- python>=3.5.2,<3.8
- boto3==1.15.18
- matplotlib==2.1.0
- numpy==1.18.5
- cython
- urllib3<1.24
- scipy>=1.4.1,<=1.5.2
- scikit-learn==0.22.1
- pandas==0.25.1
- py-xgboost<=0.90
- conda-forge::fbprophet==0.5
- holidays==0.9.11
# Currently Azure ML only supports 3.6.0 and later.
- pip==20.2.4
- python>=3.6,<3.9
- matplotlib==3.2.1
- py-xgboost==1.3.3
- pytorch::pytorch=1.4.0
- conda-forge::fbprophet==0.7.1
- cudatoolkit=10.1.243
- tornado==6.1.0
- scipy==1.5.2
- notebook
- pywin32==227
- PySocks==1.7.1
- Pygments==2.11.2
- conda-forge::pyqt==5.12.3
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets~=1.37.0
- azureml-widgets~=1.41.0
- pytorch-transformers==1.0.0
- spacy==2.1.8
- spacy==2.2.4
- pystan==2.19.1.1
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- -r https://automlsdkdataresources.blob.core.windows.net/validated-requirements/1.37.0/validated_win32_requirements.txt [--no-deps]
- -r https://automlsdkdataresources.blob.core.windows.net/validated-requirements/1.41.0/validated_win32_requirements.txt [--no-deps]
- arch==4.14
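As with the matching Linux and macOS variants below, this spec is consumed the standard conda way, e.g. `conda env create -f automl_env.yml` followed by `conda activate azure_automl` (usage assumption; these commands are not part of the diff).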

View File

@@ -1,30 +1,33 @@
name: azure_automl
channels:
- conda-forge
- pytorch
- main
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- pip==21.1.2
- python>=3.5.2,<3.8
- nb_conda
- boto3==1.15.18
- matplotlib==2.1.0
- numpy==1.18.5
- cython
- urllib3<1.24
# Currently Azure ML only supports 3.6.0 and later.
- pip==20.2.4
- python>=3.6,<3.9
- boto3==1.20.19
- botocore<=1.23.19
- matplotlib==3.2.1
- numpy==1.19.5
- cython==0.29.14
- urllib3==1.26.7
- scipy>=1.4.1,<=1.5.2
- scikit-learn==0.22.1
- pandas==0.25.1
- py-xgboost<=0.90
- conda-forge::fbprophet==0.5
- holidays==0.9.11
- py-xgboost<=1.3.3
- holidays==0.10.3
- conda-forge::fbprophet==0.7.1
- pytorch::pytorch=1.4.0
- cudatoolkit=10.1.243
- tornado==6.1.0
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets~=1.37.0
- azureml-widgets~=1.41.0
- pytorch-transformers==1.0.0
- spacy==2.1.8
- spacy==2.2.4
- pystan==2.19.1.1
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- -r https://automlsdkdataresources.blob.core.windows.net/validated-requirements/1.37.0/validated_linux_requirements.txt [--no-deps]
- -r https://automlsdkdataresources.blob.core.windows.net/validated-requirements/1.41.0/validated_linux_requirements.txt [--no-deps]
- arch==4.14

View File

@@ -1,31 +1,34 @@
name: azure_automl
channels:
- conda-forge
- pytorch
- main
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- pip==21.1.2
# Currently Azure ML only supports 3.6.0 and later.
- pip==20.2.4
- nomkl
- python>=3.5.2,<3.8
- nb_conda
- boto3==1.15.18
- matplotlib==2.1.0
- numpy==1.18.5
- cython
- urllib3<1.24
- python>=3.6,<3.9
- boto3==1.20.19
- botocore<=1.23.19
- matplotlib==3.2.1
- numpy==1.19.5
- cython==0.29.14
- urllib3==1.26.7
- scipy>=1.4.1,<=1.5.2
- scikit-learn==0.22.1
- pandas==0.25.1
- py-xgboost<=0.90
- conda-forge::fbprophet==0.5
- holidays==0.9.11
- py-xgboost<=1.3.3
- holidays==0.10.3
- conda-forge::fbprophet==0.7.1
- pytorch::pytorch=1.4.0
- cudatoolkit=9.0
- tornado==6.1.0
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets~=1.37.0
- azureml-widgets~=1.41.0
- pytorch-transformers==1.0.0
- spacy==2.1.8
- spacy==2.2.4
- pystan==2.19.1.1
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- -r https://automlsdkdataresources.blob.core.windows.net/validated-requirements/1.37.0/validated_darwin_requirements.txt [--no-deps]
- -r https://automlsdkdataresources.blob.core.windows.net/validated-requirements/1.41.0/validated_darwin_requirements.txt [--no-deps]
- arch==4.14

View File

@@ -1,21 +1,5 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -30,6 +14,7 @@
"1. [Results](#Results)\n",
"1. [Deploy](#Deploy)\n",
"1. [Test](#Test)\n",
"1. [Use auto-generated code for retraining](#Using-the-auto-generated-model-training-code-for-retraining-on-new-data)\n",
"1. [Acknowledgements](#Acknowledgements)"
]
},
@@ -55,6 +40,7 @@
"7. Create a container image.\n",
"8. Create an Azure Container Instance (ACI) service.\n",
"9. Test the ACI service.\n",
"10. Leverage the auto generated training code and use it for retraining on an updated dataset\n",
"\n",
"In addition this notebook showcases the following features\n",
"- **Blocking** certain pipelines\n",
@@ -74,7 +60,9 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "automl-import"
},
"outputs": [],
"source": [
"import json\n",
@@ -99,16 +87,6 @@
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -138,24 +116,27 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "ws-setup"
},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-classification-bmarketing-all'\n",
"experiment_name = \"automl-classification-bmarketing-all\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Experiment Name\"] = experiment.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
@@ -176,7 +157,9 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
@@ -188,12 +171,12 @@
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n",
" max_nodes=6)\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n",
" )\n",
" compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
@@ -226,7 +209,9 @@
"metadata": {},
"outputs": [],
"source": [
"data = pd.read_csv(\"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv\")\n",
"data = pd.read_csv(\n",
" \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv\"\n",
")\n",
"data.head()"
]
},
@@ -241,7 +226,12 @@
"\n",
"missing_rate = 0.75\n",
"n_missing_samples = int(np.floor(data.shape[0] * missing_rate))\n",
"missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))\n",
"missing_samples = np.hstack(\n",
" (\n",
" np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool),\n",
" np.ones(n_missing_samples, dtype=np.bool),\n",
" )\n",
")\n",
"rng = np.random.RandomState(0)\n",
"rng.shuffle(missing_samples)\n",
"missing_features = rng.randint(0, data.shape[1], n_missing_samples)\n",
@@ -254,19 +244,21 @@
"metadata": {},
"outputs": [],
"source": [
"if not os.path.isdir('data'):\n",
" os.mkdir('data')\n",
" \n",
"if not os.path.isdir(\"data\"):\n",
" os.mkdir(\"data\")\n",
"# Save the train data to a csv to be uploaded to the datastore\n",
"pd.DataFrame(data).to_csv(\"data/train_data.csv\", index=False)\n",
"\n",
"ds = ws.get_default_datastore()\n",
"ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)\n",
"\n",
"ds.upload(\n",
" src_dir=\"./data\", target_path=\"bankmarketing\", overwrite=True, show_progress=True\n",
")\n",
"\n",
"\n",
"# Upload the training data as a tabular dataset for access during training on remote compute\n",
"train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))\n",
"train_data = Dataset.Tabular.from_delimited_files(\n",
" path=ds.path(\"bankmarketing/train_data.csv\")\n",
")\n",
"label = \"y\""
]
},
@@ -326,6 +318,7 @@
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**training_data**|Input dataset, containing both features and label column.|\n",
"|**label_column_name**|The name of the label column.|\n",
"|**enable_code_generation**|Flag to enable generation of training code for each of the models that AutoML is creating.\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
@@ -343,27 +336,31 @@
" \"max_concurrent_iterations\": 4,\n",
" \"max_cores_per_iteration\": -1,\n",
" # \"n_cross_validations\": 2,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"featurization\": 'auto',\n",
" \"primary_metric\": \"AUC_weighted\",\n",
" \"featurization\": \"auto\",\n",
" \"verbosity\": logging.INFO,\n",
" \"enable_code_generation\": True,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
"automl_config = AutoMLConfig(\n",
" task=\"classification\",\n",
" debug_log=\"automl_errors.log\",\n",
" compute_target=compute_target,\n",
" experiment_exit_score=0.9984,\n",
" blocked_models = ['KNN','LinearSVM'],\n",
" blocked_models=[\"KNN\", \"LinearSVM\"],\n",
" enable_onnx_compatible_models=True,\n",
" training_data=train_data,\n",
" label_column_name=label,\n",
" validation_data=validation_dataset,\n",
" **automl_settings\n",
" **automl_settings,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"tags": []
},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous."
]
@@ -371,7 +368,9 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "experiment-submit"
},
"outputs": [],
"source": [
"remote_run = experiment.submit(automl_config, show_output=False)"
@@ -379,7 +378,9 @@
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"tags": []
},
"source": [
"Run the following cell to access previous runs. Uncomment the cell below and update the run_id."
]
@@ -430,8 +431,10 @@
"metadata": {},
"outputs": [],
"source": [
"# Download the featuurization summary JSON file locally\n",
"best_run.download_file(\"outputs/featurization_summary.json\", \"featurization_summary.json\")\n",
"# Download the featurization summary JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n",
")\n",
"\n",
"# Render the JSON as a pandas DataFrame\n",
"with open(\"featurization_summary.json\", \"r\") as f:\n",
@@ -450,10 +453,13 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "run-details"
},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(remote_run).show()"
]
},
@@ -473,9 +479,12 @@
"source": [
"# Wait for the best model explanation run to complete\n",
"from azureml.core.run import Run\n",
"\n",
"model_explainability_run_id = remote_run.id + \"_\" + \"ModelExplain\"\n",
"print(model_explainability_run_id)\n",
"model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)\n",
"model_explainability_run = Run(\n",
" experiment=experiment, run_id=model_explainability_run_id\n",
")\n",
"model_explainability_run.wait_for_completion()\n",
"\n",
"# Get the best run object\n",
@@ -556,6 +565,7 @@
"outputs": [],
"source": [
"from azureml.automl.runtime.onnx_convert import OnnxConverter\n",
"\n",
"onnx_fl_path = \"./best_model.onnx\"\n",
"OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)"
]
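Once saved, the exported model can also be sanity-checked outside the notebook's own `OnnxInferenceHelper`; a minimal sketch using `onnxruntime`, an extra dependency not used by this diff:

# Hypothetical sanity check of the exported model with onnxruntime (assumed installed).
import onnxruntime as ort

sess = ort.InferenceSession("best_model.onnx")
print("model inputs:", [(i.name, i.shape) for i in sess.get_inputs()])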
@@ -580,13 +590,17 @@
"\n",
"from azureml.automl.runtime.onnx_convert import OnnxInferenceHelper\n",
"\n",
"\n",
"def get_onnx_res(run):\n",
" res_path = 'onnx_resource.json'\n",
" run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)\n",
" res_path = \"onnx_resource.json\"\n",
" run.download_file(\n",
" name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path\n",
" )\n",
" with open(res_path) as f:\n",
" result = json.load(f)\n",
" return result\n",
"\n",
"\n",
"if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:\n",
" test_df = test_dataset.to_pandas_dataframe()\n",
" mdl_bytes = onnx_mdl.SerializeToString()\n",
@@ -598,7 +612,7 @@
" print(pred_onnx)\n",
" print(pred_prob_onnx)\n",
"else:\n",
" print('Please use Python version 3.6 or 3.7 to run the inference helper.')"
" print(\"Please use Python version 3.6 or 3.7 to run the inference helper.\")"
]
},
{
@@ -609,7 +623,7 @@
"\n",
"### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_best_child` method returns the Run object for the best model based on the default primary metric. There are additional flags that can be passed to the method if we want to retrieve the best Run based on any of the other supported metrics, or if we are just interested in the best run among the ONNX compatible runs. As always, you can execute `remote_run.get_best_child??` in a new cell to view the source or docs for the function."
"Below we select the best pipeline from our iterations. The `get_best_child` method returns the Run object for the best model based on the default primary metric. There are additional flags that can be passed to the method if we want to retrieve the best Run based on any of the other supported metrics, or if we are just interested in the best run among the ONNX compatible runs. As always, you can execute `??remote_run.get_best_child` in a new cell to view the source or docs for the function."
]
},
{
@@ -618,7 +632,7 @@
"metadata": {},
"outputs": [],
"source": [
"remote_run.get_best_child??"
"??remote_run.get_best_child"
]
},
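A sketch of the flags the paragraph above alludes to, with parameter names as commonly documented for `AutoMLRun.get_best_child` (verify against your installed SDK version):

# Retrieve the best run by an explicit metric, restricted to ONNX-compatible runs.
best_run = remote_run.get_best_child(metric="AUC_weighted", onnx_compatible=True)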
{
@@ -647,11 +661,11 @@
"metadata": {},
"outputs": [],
"source": [
"model_name = best_run.properties['model_name']\n",
"model_name = best_run.properties[\"model_name\"]\n",
"\n",
"script_file_name = 'inference/score.py'\n",
"script_file_name = \"inference/score.py\"\n",
"\n",
"best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')"
"best_run.download_file(\"outputs/scoring_file_v_1_0_0.py\", \"inference/score.py\")"
]
},
{
@@ -668,11 +682,15 @@
"metadata": {},
"outputs": [],
"source": [
"description = 'AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit'\n",
"description = \"AutoML Model trained on bank marketing data to predict if a client will subscribe to a term deposit\"\n",
"tags = None\n",
"model = remote_run.register_model(model_name = model_name, description = description, tags = tags)\n",
"model = remote_run.register_model(\n",
" model_name=model_name, description=description, tags=tags\n",
")\n",
"\n",
"print(remote_run.model_id) # This will be written to the script file later in the notebook."
"print(\n",
" remote_run.model_id\n",
") # This will be written to the script file later in the notebook."
]
},
{
@@ -690,16 +708,20 @@
"source": [
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AciWebservice\n",
"from azureml.core.webservice import Webservice\n",
"from azureml.core.model import Model\n",
"from azureml.core.environment import Environment\n",
"\n",
"inference_config = InferenceConfig(environment = best_run.get_environment(), entry_script=script_file_name)\n",
"inference_config = InferenceConfig(entry_script=script_file_name)\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 2, \n",
"aciconfig = AciWebservice.deploy_configuration(\n",
" cpu_cores=2,\n",
" memory_gb=2,\n",
" tags = {'area': \"bmData\", 'type': \"automl_classification\"}, \n",
" description = 'sample service for Automl Classification')\n",
" tags={\"area\": \"bmData\", \"type\": \"automl_classification\"},\n",
" description=\"sample service for Automl Classification\",\n",
")\n",
"\n",
"aci_service_name = 'automl-sample-bankmarketing-all'\n",
"aci_service_name = model_name.lower()\n",
"print(aci_service_name)\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
"aci_service.wait_for_deployment(True)\n",
@@ -751,8 +773,8 @@
"metadata": {},
"outputs": [],
"source": [
"X_test = test_dataset.drop_columns(columns=['y'])\n",
"y_test = test_dataset.keep_columns(columns=['y'], validate=True)\n",
"X_test = test_dataset.drop_columns(columns=[\"y\"])\n",
"y_test = test_dataset.keep_columns(columns=[\"y\"], validate=True)\n",
"test_dataset.take(5).to_pandas_dataframe()"
]
},
@@ -774,13 +796,13 @@
"source": [
"import requests\n",
"\n",
"X_test_json = X_test.to_json(orient='records')\n",
"data = \"{\\\"data\\\": \" + X_test_json +\"}\"\n",
"headers = {'Content-Type': 'application/json'}\n",
"X_test_json = X_test.to_json(orient=\"records\")\n",
"data = '{\"data\": ' + X_test_json + \"}\"\n",
"headers = {\"Content-Type\": \"application/json\"}\n",
"\n",
"resp = requests.post(aci_service.scoring_uri, data, headers=headers)\n",
"\n",
"y_pred = json.loads(json.loads(resp.text))['result']"
"y_pred = json.loads(json.loads(resp.text))[\"result\"]"
]
},
{
@@ -806,7 +828,9 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%matplotlib notebook\n",
@@ -814,19 +838,25 @@
"import itertools\n",
"\n",
"cf = confusion_matrix(actual, y_pred)\n",
"plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')\n",
"plt.imshow(cf, cmap=plt.cm.Blues, interpolation=\"nearest\")\n",
"plt.colorbar()\n",
"plt.title('Confusion Matrix')\n",
"plt.xlabel('Predicted')\n",
"plt.ylabel('Actual')\n",
"class_labels = ['no','yes']\n",
"plt.title(\"Confusion Matrix\")\n",
"plt.xlabel(\"Predicted\")\n",
"plt.ylabel(\"Actual\")\n",
"class_labels = [\"no\", \"yes\"]\n",
"tick_marks = np.arange(len(class_labels))\n",
"plt.xticks(tick_marks, class_labels)\n",
"plt.yticks([-0.5,0,1,1.5],['','no','yes',''])\n",
"plt.yticks([-0.5, 0, 1, 1.5], [\"\", \"no\", \"yes\", \"\"])\n",
"# plotting text value inside cells\n",
"thresh = cf.max() / 2.\n",
"thresh = cf.max() / 2.0\n",
"for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):\n",
" plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')\n",
" plt.text(\n",
" j,\n",
" i,\n",
" format(cf[i, j], \"d\"),\n",
" horizontalalignment=\"center\",\n",
" color=\"white\" if cf[i, j] > thresh else \"black\",\n",
" )\n",
"plt.show()"
]
},
@@ -848,6 +878,142 @@
"aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using the auto generated model training code for retraining on new data\n",
"\n",
"Because we enabled code generation when the original experiment was created, we now have access to the code that was used to generate any of the AutoML tried models. Below we'll be using the generated training script of the best model to retrain on a new dataset.\n",
"\n",
"For this demo, we'll begin by creating new retraining dataset by combining the Train & Validation datasets that were used in the original experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"original_train_data = pd.read_csv(\n",
" \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv\"\n",
")\n",
"\n",
"valid_data = pd.read_csv(\n",
" \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv\"\n",
")\n",
"\n",
"# we'll emulate an updated dataset for retraining by combining the Train & Validation datasets into a new one\n",
"retrain_pd = pd.concat([original_train_data, valid_data])\n",
"retrain_pd.to_csv(\"data/retrain_data.csv\", index=False)\n",
"ds.upload_files(\n",
" files=[\"data/retrain_data.csv\"],\n",
" target_path=\"bankmarketing/\",\n",
" overwrite=True,\n",
" show_progress=True,\n",
")\n",
"retrain_dataset = Dataset.Tabular.from_delimited_files(\n",
" path=ds.path(\"bankmarketing/retrain_data.csv\")\n",
")\n",
"\n",
"# after creating and uploading the retraining dataset, let's register it with the workspace for reuse\n",
"retrain_dataset = retrain_dataset.register(\n",
" workspace=ws,\n",
" name=\"Bankmarketing_retrain\",\n",
" description=\"Updated training dataset, includes validation data\",\n",
" create_new_version=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll download the generated script for the best run and use it for retraining. For more advanced scenarios, you can customize the training script as you need: change the featurization pipeline, change the learner algorithm or its hyperparameters, etc. \n",
"\n",
"For this exercise, we'll leave the script as it was generated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# download the autogenerated training script into the generated_code folder\n",
"best_run.download_file(\n",
" \"outputs/generated_code/script.py\", \"generated_code/training_script.py\"\n",
")\n",
"\n",
"# view the contents of the autogenerated training script\n",
"! cat generated_code/training_script.py"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
"from azureml.core import ScriptRunConfig\n",
"from azureml._restclient.models import RunTypeV2\n",
"from azureml._restclient.models.create_run_dto import CreateRunDto\n",
"from azureml._restclient.run_client import RunClient\n",
"\n",
"codegen_runid = str(uuid.uuid4())\n",
"client = RunClient(\n",
" experiment.workspace.service_context,\n",
" experiment.name,\n",
" codegen_runid,\n",
" experiment_id=experiment.id,\n",
")\n",
"\n",
"# override the training_dataset_id to point to our new retraining dataset we just registered above\n",
"dataset_arguments = [\"--training_dataset_id\", retrain_dataset.id]\n",
"\n",
"# create the retraining run as a child of the AutoML generated training run\n",
"create_run_dto = CreateRunDto(\n",
" run_id=codegen_runid,\n",
" parent_run_id=best_run.id,\n",
" description=\"AutoML Codegen Script Run using an updated training dataset\",\n",
" target=cpu_cluster_name,\n",
" run_type_v2=RunTypeV2(orchestrator=\"Execution\", traits=[\"automl-codegen\"]),\n",
")\n",
"\n",
"# the script for retraining run is pointing to the AutoML generated script\n",
"src = ScriptRunConfig(\n",
" source_directory=\"generated_code\",\n",
" script=\"training_script.py\",\n",
" arguments=dataset_arguments,\n",
" compute_target=cpu_cluster_name,\n",
" environment=best_run.get_environment(),\n",
")\n",
"run_dto = client.create_run(run_id=codegen_runid, create_run_dto=create_run_dto)\n",
"\n",
"# submit the experiment\n",
"retraining_run = experiment.submit(config=src, run_id=codegen_runid)\n",
"retraining_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After the run completes, we can get download/test/deploy to the model it has built."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"retraining_run.wait_for_completion()\n",
"\n",
"retraining_run.download_file(\"outputs/model.pkl\", \"generated_code/model.pkl\")"
]
},
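A quick local smoke test of the downloaded artifact, mirroring the `joblib.load` usage in the infer.py script later in this diff (sketch; assumes the download above succeeded):

# AutoML models are serialized with joblib (see infer.py further down in this diff).
import joblib

retrained_model = joblib.load("generated_code/model.pkl")
print(type(retrained_model))  # typically a scikit-learn pipeline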
{
"cell_type": "markdown",
"metadata": {},
@@ -890,6 +1056,9 @@
],
"friendly_name": "Automated ML run with basic edition features.",
"index_order": 5,
"kernel_info": {
"name": "python3-azureml"
},
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -905,7 +1074,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
"version": "3.6.9"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
},
"tags": [
"featurization",
@@ -916,5 +1088,5 @@
"task": "Classification"
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 1
}

View File

@@ -1,21 +1,5 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -87,16 +71,6 @@
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -106,18 +80,19 @@
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-classification-ccard-remote'\n",
"experiment_name = \"automl-classification-ccard-remote\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Experiment Name\"] = experiment.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
@@ -150,12 +125,12 @@
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n",
" max_nodes=6)\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n",
" )\n",
" compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
@@ -178,13 +153,15 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "load-data"
},
"outputs": [],
"source": [
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n",
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
"training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n",
"label_column_name = 'Class'"
"label_column_name = \"Class\""
]
},
{
@@ -210,24 +187,27 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "automl-config"
},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"n_cross_validations\": 3,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"primary_metric\": \"average_precision_score_weighted\",\n",
" \"enable_early_stopping\": True,\n",
" \"max_concurrent_iterations\": 2, # This is a limit for testing purpose, please increase it as per cluster size\n",
" \"experiment_timeout_hours\": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ablity to find the best model possible\n",
" \"verbosity\": logging.INFO,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
"automl_config = AutoMLConfig(\n",
" task=\"classification\",\n",
" debug_log=\"automl_errors.log\",\n",
" compute_target=compute_target,\n",
" training_data=training_data,\n",
" label_column_name=label_column_name,\n",
" **automl_settings\n",
" **automl_settings,\n",
")"
]
},
@@ -287,6 +267,7 @@
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(remote_run).show()"
]
},
@@ -353,8 +334,12 @@
"outputs": [],
"source": [
"# convert the test data to dataframe\n",
"X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe()\n",
"y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe()"
"X_test_df = validation_data.drop_columns(\n",
" columns=[label_column_name]\n",
").to_pandas_dataframe()\n",
"y_test_df = validation_data.keep_columns(\n",
" columns=[label_column_name], validate=True\n",
").to_pandas_dataframe()"
]
},
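The cell that actually produces `y_pred` falls outside these hunks; the usual pattern in these notebooks is to fetch the fitted model via `remote_run.get_output()` (sketch; assumes the fitted classifier exposes `predict`):

# Hypothetical bridge between the hunks shown here (not part of the diff).
best_run, fitted_model = remote_run.get_output()
y_pred = fitted_model.predict(X_test_df)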
{
@@ -389,19 +374,25 @@
"import itertools\n",
"\n",
"cf = confusion_matrix(y_test_df.values, y_pred)\n",
"plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')\n",
"plt.imshow(cf, cmap=plt.cm.Blues, interpolation=\"nearest\")\n",
"plt.colorbar()\n",
"plt.title('Confusion Matrix')\n",
"plt.xlabel('Predicted')\n",
"plt.ylabel('Actual')\n",
"class_labels = ['False','True']\n",
"plt.title(\"Confusion Matrix\")\n",
"plt.xlabel(\"Predicted\")\n",
"plt.ylabel(\"Actual\")\n",
"class_labels = [\"False\", \"True\"]\n",
"tick_marks = np.arange(len(class_labels))\n",
"plt.xticks(tick_marks, class_labels)\n",
"plt.yticks([-0.5,0,1,1.5],['','False','True',''])\n",
"plt.yticks([-0.5, 0, 1, 1.5], [\"\", \"False\", \"True\", \"\"])\n",
"# plotting text value inside cells\n",
"thresh = cf.max() / 2.\n",
"thresh = cf.max() / 2.0\n",
"for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):\n",
" plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')\n",
" plt.text(\n",
" j,\n",
" i,\n",
" format(cf[i, j], \"d\"),\n",
" horizontalalignment=\"center\",\n",
" color=\"white\" if cf[i, j] > thresh else \"black\",\n",
" )\n",
"plt.show()"
]
},

View File

@@ -1,21 +1,5 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -91,16 +75,6 @@
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -117,18 +91,19 @@
"ws = Workspace.from_config()\n",
"\n",
"# Choose an experiment name.\n",
"experiment_name = 'automl-classification-text-dnn'\n",
"experiment_name = \"automl-classification-text-dnn\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace Name\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Experiment Name\"] = experiment.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
@@ -161,13 +136,16 @@
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_NC6\", # CPU for BiLSTM, such as \"STANDARD_DS12_V2\" \n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_NC6\", # CPU for BiLSTM, such as \"STANDARD_D2_V2\"\n",
" # To use BERT (this is recommended for best performance), select a GPU such as \"STANDARD_NC6\"\n",
" # or similar GPU option\n",
" # available in your workspace\n",
" max_nodes = num_nodes)\n",
" idle_seconds_before_scaledown=60,\n",
" max_nodes=num_nodes,\n",
" )\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
@@ -189,38 +167,52 @@
"source": [
"data_dir = \"text-dnn-data\" # Local directory to store data\n",
"blobstore_datadir = data_dir # Blob store directory to store data in\n",
"target_column_name = 'y'\n",
"feature_column_name = 'X'\n",
"target_column_name = \"y\"\n",
"feature_column_name = \"X\"\n",
"\n",
"\n",
"def get_20newsgroups_data():\n",
" '''Fetches 20 Newsgroups data from scikit-learn\n",
" \"\"\"Fetches 20 Newsgroups data from scikit-learn\n",
" Returns them in form of pandas dataframes\n",
" '''\n",
" remove = ('headers', 'footers', 'quotes')\n",
" \"\"\"\n",
" remove = (\"headers\", \"footers\", \"quotes\")\n",
" categories = [\n",
" 'rec.sport.baseball',\n",
" 'rec.sport.hockey',\n",
" 'comp.graphics',\n",
" 'sci.space',\n",
" \"rec.sport.baseball\",\n",
" \"rec.sport.hockey\",\n",
" \"comp.graphics\",\n",
" \"sci.space\",\n",
" ]\n",
"\n",
" data = fetch_20newsgroups(subset = 'train', categories = categories,\n",
" shuffle = True, random_state = 42,\n",
" remove = remove)\n",
" data = pd.DataFrame({feature_column_name: data.data, target_column_name: data.target})\n",
" data = fetch_20newsgroups(\n",
" subset=\"train\",\n",
" categories=categories,\n",
" shuffle=True,\n",
" random_state=42,\n",
" remove=remove,\n",
" )\n",
" data = pd.DataFrame(\n",
" {feature_column_name: data.data, target_column_name: data.target}\n",
" )\n",
"\n",
" data_train = data[:200]\n",
" data_test = data[200:300]\n",
"\n",
" data_train = remove_blanks_20news(data_train, feature_column_name, target_column_name)\n",
" data_train = remove_blanks_20news(\n",
" data_train, feature_column_name, target_column_name\n",
" )\n",
" data_test = remove_blanks_20news(data_test, feature_column_name, target_column_name)\n",
"\n",
" return data_train, data_test\n",
"\n",
"\n",
"def remove_blanks_20news(data, feature_column_name, target_column_name):\n",
"\n",
" data[feature_column_name] = data[feature_column_name].replace(r'\\n', ' ', regex=True).apply(lambda x: x.strip())\n",
" data = data[data[feature_column_name] != '']\n",
" data[feature_column_name] = (\n",
" data[feature_column_name]\n",
" .replace(r\"\\n\", \" \", regex=True)\n",
" .apply(lambda x: x.strip())\n",
" )\n",
" data = data[data[feature_column_name] != \"\"]\n",
"\n",
" return data"
]
@@ -243,15 +235,14 @@
"if not os.path.isdir(data_dir):\n",
" os.mkdir(data_dir)\n",
"\n",
"train_data_fname = data_dir + '/train_data.csv'\n",
"test_data_fname = data_dir + '/test_data.csv'\n",
"train_data_fname = data_dir + \"/train_data.csv\"\n",
"test_data_fname = data_dir + \"/test_data.csv\"\n",
"\n",
"data_train.to_csv(train_data_fname, index=False)\n",
"data_test.to_csv(test_data_fname, index=False)\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"datastore.upload(src_dir=data_dir, target_path=blobstore_datadir,\n",
" overwrite=True)"
"datastore.upload(src_dir=data_dir, target_path=blobstore_datadir, overwrite=True)"
]
},
{
@@ -260,7 +251,9 @@
"metadata": {},
"outputs": [],
"source": [
"train_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/train_data.csv')])"
"train_dataset = Dataset.Tabular.from_delimited_files(\n",
" path=[(datastore, blobstore_datadir + \"/train_data.csv\")]\n",
")"
]
},
{
@@ -285,7 +278,7 @@
"source": [
"automl_settings = {\n",
" \"experiment_timeout_minutes\": 30,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"primary_metric\": \"accuracy\",\n",
" \"max_concurrent_iterations\": num_nodes,\n",
" \"max_cores_per_iteration\": -1,\n",
" \"enable_dnn\": True,\n",
@@ -296,13 +289,14 @@
" \"enable_stack_ensemble\": False,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
"automl_config = AutoMLConfig(\n",
" task=\"classification\",\n",
" debug_log=\"automl_errors.log\",\n",
" compute_target=compute_target,\n",
" training_data=train_dataset,\n",
" label_column_name=target_column_name,\n",
" blocked_models = ['LightGBM', 'XGBoostClassifier'],\n",
" **automl_settings\n",
" blocked_models=[\"LightGBM\", \"XGBoostClassifier\"],\n",
" **automl_settings,\n",
")"
]
},
@@ -342,8 +336,7 @@
"metadata": {},
"source": [
"For local inferencing, you can load the model locally via. the method `remote_run.get_output()`. For more information on the arguments expected by this method, you can run `remote_run.get_output??`.\n",
"Note that when the model contains BERT, this step will require pytorch and pytorch-transformers installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your MachineLearningNotebooks folder here:\n",
"MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl_env.yml\n"
"Note that when the model contains BERT, this step will require pytorch and pytorch-transformers installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your azureml-examples folder here: \"azureml-examples/python-sdk/tutorials/automl-with-azureml\""
]
},
{
@@ -369,15 +362,17 @@
"metadata": {},
"outputs": [],
"source": [
"# Download the featuurization summary JSON file locally\n",
"best_run.download_file(\"outputs/featurization_summary.json\", \"featurization_summary.json\")\n",
"# Download the featurization summary JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n",
")\n",
"\n",
"# Render the JSON as a pandas DataFrame\n",
"with open(\"featurization_summary.json\", \"r\") as f:\n",
" records = json.load(f)\n",
"\n",
"featurization_summary = pd.DataFrame.from_records(records)\n",
"featurization_summary['Transformations'].tolist()"
"featurization_summary[\"Transformations\"].tolist()"
]
},
{
@@ -402,7 +397,7 @@
"outputs": [],
"source": [
"summary_df = get_result_df(automl_run)\n",
"best_dnn_run_id = summary_df['run_id'].iloc[0]\n",
"best_dnn_run_id = summary_df[\"run_id\"].iloc[0]\n",
"best_dnn_run = Run(experiment, best_dnn_run_id)"
]
},
@@ -412,11 +407,11 @@
"metadata": {},
"outputs": [],
"source": [
"model_dir = 'Model' # Local folder where the model will be stored temporarily\n",
"model_dir = \"Model\" # Local folder where the model will be stored temporarily\n",
"if not os.path.isdir(model_dir):\n",
" os.mkdir(model_dir)\n",
"\n",
"best_dnn_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')"
"best_dnn_run.download_file(\"outputs/model.pkl\", model_dir + \"/model.pkl\")"
]
},
{
@@ -433,11 +428,10 @@
"outputs": [],
"source": [
"# Register the model\n",
"model_name = 'textDNN-20News'\n",
"model = Model.register(model_path = model_dir + '/model.pkl',\n",
" model_name = model_name,\n",
" tags=None,\n",
" workspace=ws)"
"model_name = \"textDNN-20News\"\n",
"model = Model.register(\n",
" model_path=model_dir + \"/model.pkl\", model_name=model_name, tags=None, workspace=ws\n",
")"
]
},
{
@@ -462,7 +456,9 @@
"metadata": {},
"outputs": [],
"source": [
"test_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/test_data.csv')])\n",
"test_dataset = Dataset.Tabular.from_delimited_files(\n",
" path=[(datastore, blobstore_datadir + \"/test_data.csv\")]\n",
")\n",
"\n",
"# preview the first 3 rows of the dataset\n",
"test_dataset.take(3).to_pandas_dataframe()"
@@ -483,9 +479,9 @@
"metadata": {},
"outputs": [],
"source": [
"script_folder = os.path.join(os.getcwd(), 'inference')\n",
"script_folder = os.path.join(os.getcwd(), \"inference\")\n",
"os.makedirs(script_folder, exist_ok=True)\n",
"shutil.copy('infer.py', script_folder)"
"shutil.copy(\"infer.py\", script_folder)"
]
},
{
@@ -494,8 +490,15 @@
"metadata": {},
"outputs": [],
"source": [
"test_run = run_inference(test_experiment, compute_target, script_folder, best_dnn_run,\n",
" test_dataset, target_column_name, model_name)"
"test_run = run_inference(\n",
" test_experiment,\n",
" compute_target,\n",
" script_folder,\n",
" best_dnn_run,\n",
" test_dataset,\n",
" target_column_name,\n",
" model_name,\n",
")"
]
},
{

View File

@@ -4,52 +4,65 @@ from azureml.train.estimator import Estimator
from azureml.core.run import Run
def run_inference(test_experiment, compute_target, script_folder, train_run,
test_dataset, target_column_name, model_name):
def run_inference(
test_experiment,
compute_target,
script_folder,
train_run,
test_dataset,
target_column_name,
model_name,
):
inference_env = train_run.get_environment()
est = Estimator(source_directory=script_folder,
entry_script='infer.py',
est = Estimator(
source_directory=script_folder,
entry_script="infer.py",
script_params={
'--target_column_name': target_column_name,
'--model_name': model_name
"--target_column_name": target_column_name,
"--model_name": model_name,
},
inputs=[
test_dataset.as_named_input('test_data')
],
inputs=[test_dataset.as_named_input("test_data")],
compute_target=compute_target,
environment_definition=inference_env)
environment_definition=inference_env,
)
run = test_experiment.submit(
est, tags={
'training_run_id': train_run.id,
'run_algorithm': train_run.properties['run_algorithm'],
'valid_score': train_run.properties['score'],
'primary_metric': train_run.properties['primary_metric']
})
est,
tags={
"training_run_id": train_run.id,
"run_algorithm": train_run.properties["run_algorithm"],
"valid_score": train_run.properties["score"],
"primary_metric": train_run.properties["primary_metric"],
},
)
run.log("run_algorithm", run.tags['run_algorithm'])
run.log("run_algorithm", run.tags["run_algorithm"])
return run
def get_result_df(remote_run):
children = list(remote_run.get_children(recursive=True))
summary_df = pd.DataFrame(index=['run_id', 'run_algorithm',
'primary_metric', 'Score'])
summary_df = pd.DataFrame(
index=["run_id", "run_algorithm", "primary_metric", "Score"]
)
goal_minimize = False
for run in children:
if('run_algorithm' in run.properties and 'score' in run.properties):
summary_df[run.id] = [run.id, run.properties['run_algorithm'],
run.properties['primary_metric'],
float(run.properties['score'])]
if('goal' in run.properties):
goal_minimize = run.properties['goal'].split('_')[-1] == 'min'
if "run_algorithm" in run.properties and "score" in run.properties:
summary_df[run.id] = [
run.id,
run.properties["run_algorithm"],
run.properties["primary_metric"],
float(run.properties["score"]),
]
if "goal" in run.properties:
goal_minimize = run.properties["goal"].split("_")[-1] == "min"
summary_df = summary_df.T.sort_values(
'Score',
ascending=goal_minimize).drop_duplicates(['run_algorithm'])
summary_df = summary_df.set_index('run_algorithm')
"Score", ascending=goal_minimize
).drop_duplicates(["run_algorithm"])
summary_df = summary_df.set_index("run_algorithm")
return summary_df

View File

@@ -12,19 +12,22 @@ from azureml.core.model import Model
parser = argparse.ArgumentParser()
parser.add_argument(
'--target_column_name', type=str, dest='target_column_name',
help='Target Column Name')
"--target_column_name",
type=str,
dest="target_column_name",
help="Target Column Name",
)
parser.add_argument(
'--model_name', type=str, dest='model_name',
help='Name of registered model')
"--model_name", type=str, dest="model_name", help="Name of registered model"
)
args = parser.parse_args()
target_column_name = args.target_column_name
model_name = args.model_name
print('args passed are: ')
print('Target column name: ', target_column_name)
print('Name of registered model: ', model_name)
print("args passed are: ")
print("Target column name: ", target_column_name)
print("Name of registered model: ", model_name)
model_path = Model.get_model_path(model_name)
# deserialize the model file back into a sklearn model
@@ -32,13 +35,16 @@ model = joblib.load(model_path)
run = Run.get_context()
# get input dataset by name
test_dataset = run.input_datasets['test_data']
test_dataset = run.input_datasets["test_data"]
X_test_df = test_dataset.drop_columns(columns=[target_column_name]) \
.to_pandas_dataframe()
y_test_df = test_dataset.with_timestamp_columns(None) \
.keep_columns(columns=[target_column_name]) \
X_test_df = test_dataset.drop_columns(
columns=[target_column_name]
).to_pandas_dataframe()
y_test_df = (
test_dataset.with_timestamp_columns(None)
.keep_columns(columns=[target_column_name])
.to_pandas_dataframe()
)
predicted = model.predict_proba(X_test_df)
@@ -47,11 +53,13 @@ if isinstance(predicted, pd.DataFrame):
# Use the AutoML scoring module
train_labels = model.classes_
class_labels = np.unique(np.concatenate((y_test_df.values, np.reshape(train_labels, (-1, 1)))))
class_labels = np.unique(
np.concatenate((y_test_df.values, np.reshape(train_labels, (-1, 1))))
)
classification_metrics = list(constants.CLASSIFICATION_SCALAR_SET)
scores = scoring.score_classification(y_test_df.values, predicted,
classification_metrics,
class_labels, train_labels)
scores = scoring.score_classification(
y_test_df.values, predicted, classification_metrics, class_labels, train_labels
)
print("scores:")
print(scores)

View File

@@ -1,20 +1,5 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/continous-retraining/auto-ml-continuous-retraining.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -75,16 +60,6 @@
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -118,17 +93,18 @@
"dstor = ws.get_default_datastore()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = 'retrain-noaaweather'\n",
"experiment_name = \"retrain-noaaweather\"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
@@ -164,12 +140,12 @@
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n",
" max_nodes=4)\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n",
" )\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
@@ -196,12 +172,19 @@
"\n",
"conda_run_config.environment.docker.enabled = True\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]', 'applicationinsights', 'azureml-opendatasets', 'azureml-defaults'], \n",
" conda_packages=['numpy==1.16.2'], \n",
" pin_sdk_version=False)\n",
"cd = CondaDependencies.create(\n",
" pip_packages=[\n",
" \"azureml-sdk[automl]\",\n",
" \"applicationinsights\",\n",
" \"azureml-opendatasets\",\n",
" \"azureml-defaults\",\n",
" ],\n",
" conda_packages=[\"numpy==1.16.2\"],\n",
" pin_sdk_version=False,\n",
")\n",
"conda_run_config.environment.python.conda_dependencies = cd\n",
"\n",
"print('run config is ready')"
"print(\"run config is ready\")"
]
},
{
@@ -242,12 +225,14 @@
"from azureml.pipeline.steps import PythonScriptStep\n",
"\n",
"ds_name = PipelineParameter(name=\"ds_name\", default_value=dataset)\n",
"upload_data_step = PythonScriptStep(script_name=\"upload_weather_data.py\", \n",
"upload_data_step = PythonScriptStep(\n",
" script_name=\"upload_weather_data.py\",\n",
" allow_reuse=False,\n",
" name=\"upload_weather_data\",\n",
" arguments=[\"--ds_name\", ds_name],\n",
" compute_target=compute_target,\n",
" runconfig=conda_run_config)"
" runconfig=conda_run_config,\n",
")"
]
},
{
@@ -264,10 +249,11 @@
"outputs": [],
"source": [
"data_pipeline = Pipeline(\n",
" description=\"pipeline_with_uploaddata\",\n",
" workspace=ws, \n",
" steps=[upload_data_step])\n",
"data_pipeline_run = experiment.submit(data_pipeline, pipeline_parameters={\"ds_name\":dataset})"
" description=\"pipeline_with_uploaddata\", workspace=ws, steps=[upload_data_step]\n",
")\n",
"data_pipeline_run = experiment.submit(\n",
" data_pipeline, pipeline_parameters={\"ds_name\": dataset}\n",
")"
]
},
{
@@ -307,13 +293,14 @@
"metadata": {},
"outputs": [],
"source": [
"data_prep_step = PythonScriptStep(script_name=\"check_data.py\", \n",
"data_prep_step = PythonScriptStep(\n",
" script_name=\"check_data.py\",\n",
" allow_reuse=False,\n",
" name=\"check_data\",\n",
" arguments=[\"--ds_name\", ds_name,\n",
" \"--model_name\", model_name],\n",
" arguments=[\"--ds_name\", ds_name, \"--model_name\", model_name],\n",
" compute_target=compute_target,\n",
" runconfig=conda_run_config)"
" runconfig=conda_run_config,\n",
")"
]
},
{
@@ -323,6 +310,7 @@
"outputs": [],
"source": [
"from azureml.core import Dataset\n",
"\n",
"train_ds = Dataset.get_by_name(ws, dataset)\n",
"train_ds = train_ds.drop_columns([\"partition_date\"])"
]
@@ -348,20 +336,21 @@
" \"iteration_timeout_minutes\": 10,\n",
" \"experiment_timeout_hours\": 0.25,\n",
" \"n_cross_validations\": 3,\n",
" \"primary_metric\": 'normalized_root_mean_squared_error',\n",
" \"primary_metric\": \"r2_score\",\n",
" \"max_concurrent_iterations\": 3,\n",
" \"max_cores_per_iteration\": -1,\n",
" \"verbosity\": logging.INFO,\n",
" \"enable_early_stopping\": True\n",
" \"enable_early_stopping\": True,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'regression',\n",
" debug_log = 'automl_errors.log',\n",
"automl_config = AutoMLConfig(\n",
" task=\"regression\",\n",
" debug_log=\"automl_errors.log\",\n",
" path=\".\",\n",
" compute_target=compute_target,\n",
" training_data=train_ds,\n",
" label_column_name=target_column_name,\n",
" **automl_settings\n",
" **automl_settings,\n",
")"
]
},
@@ -373,17 +362,21 @@
"source": [
"from azureml.pipeline.core import PipelineData, TrainingOutput\n",
"\n",
"metrics_output_name = 'metrics_output'\n",
"best_model_output_name = 'best_model_output'\n",
"metrics_output_name = \"metrics_output\"\n",
"best_model_output_name = \"best_model_output\"\n",
"\n",
"metrics_data = PipelineData(name='metrics_data',\n",
"metrics_data = PipelineData(\n",
" name=\"metrics_data\",\n",
" datastore=dstor,\n",
" pipeline_output_name=metrics_output_name,\n",
" training_output=TrainingOutput(type='Metrics'))\n",
"model_data = PipelineData(name='model_data',\n",
" training_output=TrainingOutput(type=\"Metrics\"),\n",
")\n",
"model_data = PipelineData(\n",
" name=\"model_data\",\n",
" datastore=dstor,\n",
" pipeline_output_name=best_model_output_name,\n",
" training_output=TrainingOutput(type='Model'))"
" training_output=TrainingOutput(type=\"Model\"),\n",
")"
]
},
{
@@ -393,10 +386,11 @@
"outputs": [],
"source": [
"automl_step = AutoMLStep(\n",
" name='automl_module',\n",
" name=\"automl_module\",\n",
" automl_config=automl_config,\n",
" outputs=[metrics_data, model_data],\n",
" allow_reuse=False)"
" allow_reuse=False,\n",
")"
]
},
{
@@ -413,13 +407,22 @@
"metadata": {},
"outputs": [],
"source": [
"register_model_step = PythonScriptStep(script_name=\"register_model.py\",\n",
"register_model_step = PythonScriptStep(\n",
" script_name=\"register_model.py\",\n",
" name=\"register_model\",\n",
" allow_reuse=False,\n",
" arguments=[\"--model_name\", model_name, \"--model_path\", model_data, \"--ds_name\", ds_name],\n",
" arguments=[\n",
" \"--model_name\",\n",
" model_name,\n",
" \"--model_path\",\n",
" model_data,\n",
" \"--ds_name\",\n",
" ds_name,\n",
" ],\n",
" inputs=[model_data],\n",
" compute_target=compute_target,\n",
" runconfig=conda_run_config)"
" runconfig=conda_run_config,\n",
")"
]
},
{
@@ -438,7 +441,8 @@
"training_pipeline = Pipeline(\n",
" description=\"training_pipeline\",\n",
" workspace=ws,\n",
" steps=[data_prep_step, automl_step, register_model_step])"
" steps=[data_prep_step, automl_step, register_model_step],\n",
")"
]
},
{
@@ -447,8 +451,10 @@
"metadata": {},
"outputs": [],
"source": [
"training_pipeline_run = experiment.submit(training_pipeline, pipeline_parameters={\n",
" \"ds_name\": dataset, \"model_name\": \"noaaweatherds\"})"
"training_pipeline_run = experiment.submit(\n",
" training_pipeline,\n",
" pipeline_parameters={\"ds_name\": dataset, \"model_name\": \"noaaweatherds\"},\n",
")"
]
},
{
@@ -477,8 +483,8 @@
"pipeline_name = \"Retraining-Pipeline-NOAAWeather\"\n",
"\n",
"published_pipeline = training_pipeline.publish(\n",
" name=pipeline_name, \n",
" description=\"Pipeline that retrains AutoML model\")\n",
" name=pipeline_name, description=\"Pipeline that retrains AutoML model\"\n",
")\n",
"\n",
"published_pipeline"
]
@@ -490,13 +496,17 @@
"outputs": [],
"source": [
"from azureml.pipeline.core import Schedule\n",
"schedule = Schedule.create(workspace=ws, name=\"RetrainingSchedule\",\n",
"\n",
"schedule = Schedule.create(\n",
" workspace=ws,\n",
" name=\"RetrainingSchedule\",\n",
" pipeline_parameters={\"ds_name\": dataset, \"model_name\": \"noaaweatherds\"},\n",
" pipeline_id=published_pipeline.id,\n",
" experiment_name=experiment_name,\n",
" datastore=dstor,\n",
" wait_for_provisioning=True,\n",
" polling_interval=1440)"
" polling_interval=1440,\n",
")"
]
},
{
@@ -520,8 +530,8 @@
"pipeline_name = \"DataIngestion-Pipeline-NOAAWeather\"\n",
"\n",
"published_pipeline = training_pipeline.publish(\n",
" name=pipeline_name, \n",
" description=\"Pipeline that updates NOAAWeather Dataset\")\n",
" name=pipeline_name, description=\"Pipeline that updates NOAAWeather Dataset\"\n",
")\n",
"\n",
"published_pipeline"
]
@@ -533,13 +543,17 @@
"outputs": [],
"source": [
"from azureml.pipeline.core import Schedule\n",
"schedule = Schedule.create(workspace=ws, name=\"RetrainingSchedule-DataIngestion\",\n",
"\n",
"schedule = Schedule.create(\n",
" workspace=ws,\n",
" name=\"RetrainingSchedule-DataIngestion\",\n",
" pipeline_parameters={\"ds_name\": dataset},\n",
" pipeline_id=published_pipeline.id,\n",
" experiment_name=experiment_name,\n",
" datastore=dstor,\n",
" wait_for_provisioning=True,\n",
" polling_interval=1440)"
" polling_interval=1440,\n",
")"
]
}
],
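Both schedules created by this notebook keep submitting pipeline runs every polling_interval minutes. A hedged cleanup sketch, assuming the workspace and the schedule names used above:

from azureml.core import Workspace
from azureml.pipeline.core import Schedule

# Cleanup sketch: disable the schedules this notebook created so the
# published pipelines stop being submitted on every polling interval.
ws = Workspace.from_config()
for schedule in Schedule.list(ws):
    if schedule.name in ("RetrainingSchedule", "RetrainingSchedule-DataIngestion"):
        schedule.disable(wait_for_provisioning=True)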

View File

@@ -31,7 +31,7 @@ try:
model = Model(ws, args.model_name)
last_train_time = model.created_time
print("Model was last trained on {0}.".format(last_train_time))
except Exception:
except Exception as e:
print("Could not get last model train time.")
last_train_time = datetime.min.replace(tzinfo=pytz.UTC)
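This fallback is what turns a missing model into a guaranteed first training run. A self-contained sketch of the same freshness check, assuming the "noaaweatherds" dataset and model names used elsewhere in the notebook:

from datetime import datetime

import pytz
from azureml.core import Dataset, Model, Workspace

# Freshness-check sketch: retrain only when the dataset changed after the
# registered model was created; a missing model forces the first run.
ws = Workspace.from_config()
try:
    last_train_time = Model(ws, "noaaweatherds").created_time
except Exception:
    last_train_time = datetime.min.replace(tzinfo=pytz.UTC)

dataset = Dataset.get_by_name(ws, "noaaweatherds")
if dataset.data_changed_time > last_train_time:
    print("New data found; proceeding with retraining.")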

View File

@@ -25,9 +25,11 @@ datasets = [(Dataset.Scenario.TRAINING, train_ds)]
# Register model with training dataset
model = Model.register(workspace=ws,
model = Model.register(
workspace=ws,
model_path=args.model_path,
model_name=args.model_name,
datasets=datasets)
datasets=datasets,
)
print("Registered version {0} of model {1}".format(model.version, model.name))
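A quick verification sketch, assuming the "noaaweatherds" model name used by the pipeline, to confirm each retraining run registered a new version:

from azureml.core import Model, Workspace

# Verification sketch: list every registered version of the model.
ws = Workspace.from_config()
for m in Model.list(ws, name="noaaweatherds"):
    print(m.name, m.version, m.created_time)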

View File

@@ -16,26 +16,82 @@ if type(run) == _OfflineRun:
else:
ws = run.experiment.workspace
usaf_list = ['725724', '722149', '723090', '722159', '723910', '720279',
'725513', '725254', '726430', '720381', '723074', '726682',
'725486', '727883', '723177', '722075', '723086', '724053',
'725070', '722073', '726060', '725224', '725260', '724520',
'720305', '724020', '726510', '725126', '722523', '703333',
'722249', '722728', '725483', '722972', '724975', '742079',
'727468', '722193', '725624', '722030', '726380', '720309',
'722071', '720326', '725415', '724504', '725665', '725424',
'725066']
usaf_list = [
"725724",
"722149",
"723090",
"722159",
"723910",
"720279",
"725513",
"725254",
"726430",
"720381",
"723074",
"726682",
"725486",
"727883",
"723177",
"722075",
"723086",
"724053",
"725070",
"722073",
"726060",
"725224",
"725260",
"724520",
"720305",
"724020",
"726510",
"725126",
"722523",
"703333",
"722249",
"722728",
"725483",
"722972",
"724975",
"742079",
"727468",
"722193",
"725624",
"722030",
"726380",
"720309",
"722071",
"720326",
"725415",
"724504",
"725665",
"725424",
"725066",
]
def get_noaa_data(start_time, end_time):
columns = ['usaf', 'wban', 'datetime', 'latitude', 'longitude', 'elevation',
'windAngle', 'windSpeed', 'temperature', 'stationName', 'p_k']
columns = [
"usaf",
"wban",
"datetime",
"latitude",
"longitude",
"elevation",
"windAngle",
"windSpeed",
"temperature",
"stationName",
"p_k",
]
isd = NoaaIsdWeather(start_time, end_time, cols=columns)
noaa_df = isd.to_pandas_dataframe()
df_filtered = noaa_df[noaa_df["usaf"].isin(usaf_list)]
df_filtered.reset_index(drop=True)
print("Received {0} rows of training data between {1} and {2}".format(
df_filtered.shape[0], start_time, end_time))
print(
"Received {0} rows of training data between {1} and {2}".format(
df_filtered.shape[0], start_time, end_time
)
)
return df_filtered
@@ -54,11 +110,12 @@ end_time = datetime.utcnow()
try:
ds = Dataset.get_by_name(ws, args.ds_name)
end_time_last_slice = ds.data_changed_time.replace(tzinfo=None)
print("Dataset {0} last updated on {1}".format(args.ds_name,
end_time_last_slice))
print("Dataset {0} last updated on {1}".format(args.ds_name, end_time_last_slice))
except Exception:
print(traceback.format_exc())
print("Dataset with name {0} not found, registering new dataset.".format(args.ds_name))
print(
"Dataset with name {0} not found, registering new dataset.".format(args.ds_name)
)
register_dataset = True
end_time = datetime(2021, 5, 1, 0, 0)
end_time_last_slice = end_time - relativedelta(weeks=2)
@@ -66,26 +123,35 @@ except Exception:
train_df = get_noaa_data(end_time_last_slice, end_time)
if train_df.size > 0:
print("Received {0} rows of new data after {1}.".format(
train_df.shape[0], end_time_last_slice))
folder_name = "{}/{:04d}/{:02d}/{:02d}/{:02d}/{:02d}/{:02d}".format(args.ds_name, end_time.year,
end_time.month, end_time.day,
end_time.hour, end_time.minute,
end_time.second)
print(
"Received {0} rows of new data after {1}.".format(
train_df.shape[0], end_time_last_slice
)
)
folder_name = "{}/{:04d}/{:02d}/{:02d}/{:02d}/{:02d}/{:02d}".format(
args.ds_name,
end_time.year,
end_time.month,
end_time.day,
end_time.hour,
end_time.minute,
end_time.second,
)
file_path = "{0}/data.csv".format(folder_name)
# Add a new partition to the registered dataset
os.makedirs(folder_name, exist_ok=True)
train_df.to_csv(file_path, index=False)
dstor.upload_files(files=[file_path],
target_path=folder_name,
overwrite=True,
show_progress=True)
dstor.upload_files(
files=[file_path], target_path=folder_name, overwrite=True, show_progress=True
)
else:
print("No new data since {0}.".format(end_time_last_slice))
if register_dataset:
ds = Dataset.Tabular.from_delimited_files(dstor.path("{}/**/*.csv".format(
args.ds_name)), partition_format='/{partition_date:yyyy/MM/dd/HH/mm/ss}/data.csv')
ds = Dataset.Tabular.from_delimited_files(
dstor.path("{}/**/*.csv".format(args.ds_name)),
partition_format="/{partition_date:yyyy/MM/dd/HH/mm/ss}/data.csv",
)
ds.register(ws, name=args.ds_name)
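Because the dataset is registered with a partition_format, the timestamp in each upload folder surfaces as a partition_date column. A consumption sketch, assuming the "noaaweatherds" dataset name:

from azureml.core import Dataset, Workspace

# Consumption sketch: 'partition_date' arrives as an ordinary column, so
# the newest slice can be selected with a plain pandas filter.
ws = Workspace.from_config()
ds = Dataset.get_by_name(ws, "noaaweatherds")
df = ds.to_pandas_dataframe()
latest = df[df["partition_date"] == df["partition_date"].max()]
print("Rows in newest partition:", latest.shape[0])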

View File

@@ -1,17 +1,19 @@
name: azure_automl_experimental
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- pip<=19.3.1
- python>=3.5.2,<3.8
- nb_conda
- cython
- urllib3<1.24
# Currently Azure ML only supports 3.6.0 and later.
- pip<=20.2.4
- python>=3.6.0,<3.9
- cython==0.29.14
- urllib3==1.26.7
- PyJWT < 2.0.0
- numpy==1.18.5
- pywin32==227
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azure-core==1.21.1
- azure-identity==1.7.0
- azureml-defaults
- azureml-sdk
- azureml-widgets

View File

@@ -1,18 +1,21 @@
name: azure_automl_experimental
channels:
- conda-forge
- main
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- pip<=19.3.1
# Currently Azure ML only supports 3.6.0 and later.
- pip<=20.2.4
- nomkl
- python>=3.5.2,<3.8
- nb_conda
- cython
- urllib3<1.24
- python>=3.6.0,<3.9
- urllib3==1.26.7
- PyJWT < 2.0.0
- numpy==1.18.5
- numpy==1.19.5
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azure-core==1.21.1
- azure-identity==1.7.0
- azureml-defaults
- azureml-sdk
- azureml-widgets

View File

@@ -92,7 +92,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.41.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -115,7 +115,7 @@
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.set_option('display.max_colwidth', None)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]

View File

@@ -91,7 +91,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.41.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -180,6 +180,29 @@
"label = \"ERP\"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The split data will be used on the remote compute by ModelProxy and locally to compare results.\n",
"So, we need to persist the split data to avoid discrepancies from different package versions between the local and remote environments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ds = ws.get_default_datastore()\n",
"\n",
"train_data = Dataset.Tabular.register_pandas_dataframe(\n",
" train_data.to_pandas_dataframe(), target=(ds, \"machineTrainData\"), name=\"train_data\")\n",
"\n",
"test_data = Dataset.Tabular.register_pandas_dataframe(\n",
" test_data.to_pandas_dataframe(), target=(ds, \"machineTestData\"), name=\"test_data\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -304,7 +327,8 @@
"metadata": {},
"source": [
"#### Show hyperparameters\n",
"Show the model pipeline used for the best run with its hyperparameters."
"Show the model pipeline used for the best run with its hyperparameters.\n",
"For ensemble pipelines it shows the iterations and algorithms that are ensembled."
]
},
{
@@ -313,8 +337,19 @@
"metadata": {},
"outputs": [],
"source": [
"run_properties = json.loads(best_run.get_details()['properties']['pipeline_script'])\n",
"print(json.dumps(run_properties, indent = 1)) "
"run_properties = best_run.get_details()['properties']\n",
"pipeline_script = json.loads(run_properties['pipeline_script'])\n",
"print(json.dumps(pipeline_script, indent = 1)) \n",
"\n",
"if 'ensembled_iterations' in run_properties:\n",
" print(\"\")\n",
" print(\"Ensembled Iterations\")\n",
" print(run_properties['ensembled_iterations'])\n",
" \n",
"if 'ensembled_algorithms' in run_properties:\n",
" print(\"\")\n",
" print(\"Ensembled Algorithms\")\n",
" print(run_properties['ensembled_algorithms'])"
]
},
{

View File

@@ -5,6 +5,7 @@ import json
import os
import re
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
@@ -121,7 +122,7 @@ def calculate_scores_and_build_plots(
input_dir: str, output_dir: str, automl_settings: Dict[str, Any]
):
os.makedirs(output_dir, exist_ok=True)
grains = automl_settings.get(constants.TimeSeries.GRAIN_COLUMN_NAMES)
grains = automl_settings.get(constants.TimeSeries.TIME_SERIES_ID_COLUMN_NAMES)
time_column_name = automl_settings.get(constants.TimeSeries.TIME_COLUMN_NAME)
if grains is None:
grains = []
@@ -146,6 +147,9 @@ def calculate_scores_and_build_plots(
_draw_one_plot(one_forecast, time_column_name, grains, pdf)
pdf.close()
forecast_df.to_csv(os.path.join(output_dir, FORECASTS_FILE), index=False)
# Remove np.NaN and np.inf from the prediction and actuals data.
forecast_df.replace([np.inf, -np.inf], np.nan, inplace=True)
forecast_df.dropna(subset=[ACTUALS, PREDICTIONS], inplace=True)
metrics = compute_all_metrics(forecast_df, grains + [BACKTEST_ITER])
metrics.to_csv(os.path.join(output_dir, SCORES_FILE), index=False)
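A toy illustration of the cleanup the two new lines perform before metrics are computed; the column names stand in for the script's ACTUALS and PREDICTIONS constants.

import numpy as np
import pandas as pd

# Rows whose actuals or predictions are NaN or +/-inf are dropped before
# metric computation; only the first row survives in this toy frame.
df = pd.DataFrame({"actual": [1.0, np.inf, 3.0], "predicted": [1.1, 2.0, np.nan]})
df.replace([np.inf, -np.inf], np.nan, inplace=True)
df.dropna(subset=["actual", "predicted"], inplace=True)
print(df)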

View File

@@ -86,7 +86,8 @@
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Default datastore name\"] = dstore.name\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
@@ -322,10 +323,10 @@
"| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n",
"| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n",
"| **label_column_name** | The name of the label column. |\n",
"| **max_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n",
"| **forecast_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n",
"| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n",
"| **time_column_name** | The name of your time column. |\n",
"| **grain_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n",
"| **time_series_id_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n",
"| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n",
"| **partition_column_names** | The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series. |"
]
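The renames in this table (max_horizon to forecast_horizon, grain_column_names to time_series_id_column_names) mirror the ForecastingParameters helper; the settings hunk below applies them to the dict. A sketch with placeholder column names:

from azureml.automl.core.forecasting_parameters import ForecastingParameters

# Sketch of the renamed settings as a ForecastingParameters object; the
# column names are placeholders for TIME_COLNAME and partition_column_names.
forecasting_parameters = ForecastingParameters(
    time_column_name="Date",
    forecast_horizon=6,
    time_series_id_column_names=["Store", "Brand"],
)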
@@ -354,8 +355,8 @@
" \"label_column_name\": TARGET_COLNAME,\n",
" \"n_cross_validations\": 3,\n",
" \"time_column_name\": TIME_COLNAME,\n",
" \"max_horizon\": 6,\n",
" \"grain_column_names\": partition_column_names,\n",
" \"forecast_horizon\": 6,\n",
" \"time_series_id_column_names\": partition_column_names,\n",
" \"track_child_runs\": False,\n",
"}\n",
"\n",

View File

@@ -5,6 +5,7 @@ import json
import os
import re
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
@@ -146,6 +147,9 @@ def calculate_scores_and_build_plots(
_draw_one_plot(one_forecast, time_column_name, grains, pdf)
pdf.close()
forecast_df.to_csv(os.path.join(output_dir, FORECASTS_FILE), index=False)
# Remove np.NaN and np.inf from the prediction and actuals data.
forecast_df.replace([np.inf, -np.inf], np.nan, inplace=True)
forecast_df.dropna(subset=[ACTUALS, PREDICTIONS], inplace=True)
metrics = compute_all_metrics(forecast_df, grains + [BACKTEST_ITER])
metrics.to_csv(os.path.join(output_dir, SCORES_FILE), index=False)

View File

@@ -100,7 +100,8 @@
"output[\"SKU\"] = ws.sku\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]

View File

@@ -1,20 +0,0 @@
DATE,grain,BeerProduction
2017-01-01,grain,9049
2017-02-01,grain,10458
2017-03-01,grain,12489
2017-04-01,grain,11499
2017-05-01,grain,13553
2017-06-01,grain,14740
2017-07-01,grain,11424
2017-08-01,grain,13412
2017-09-01,grain,11917
2017-10-01,grain,12721
2017-11-01,grain,13272
2017-12-01,grain,14278
2018-01-01,grain,9572
2018-02-01,grain,10423
2018-03-01,grain,12667
2018-04-01,grain,11904
2018-05-01,grain,14120
2018-06-01,grain,14565
2018-07-01,grain,12622

View File

@@ -1,301 +0,0 @@
DATE,grain,BeerProduction
1992-01-01,grain,3459
1992-02-01,grain,3458
1992-03-01,grain,4002
1992-04-01,grain,4564
1992-05-01,grain,4221
1992-06-01,grain,4529
1992-07-01,grain,4466
1992-08-01,grain,4137
1992-09-01,grain,4126
1992-10-01,grain,4259
1992-11-01,grain,4240
1992-12-01,grain,4936
1993-01-01,grain,3031
1993-02-01,grain,3261
1993-03-01,grain,4160
1993-04-01,grain,4377
1993-05-01,grain,4307
1993-06-01,grain,4696
1993-07-01,grain,4458
1993-08-01,grain,4457
1993-09-01,grain,4364
1993-10-01,grain,4236
1993-11-01,grain,4500
1993-12-01,grain,4974
1994-01-01,grain,3075
1994-02-01,grain,3377
1994-03-01,grain,4443
1994-04-01,grain,4261
1994-05-01,grain,4460
1994-06-01,grain,4985
1994-07-01,grain,4324
1994-08-01,grain,4719
1994-09-01,grain,4374
1994-10-01,grain,4248
1994-11-01,grain,4784
1994-12-01,grain,4971
1995-01-01,grain,3370
1995-02-01,grain,3484
1995-03-01,grain,4269
1995-04-01,grain,3994
1995-05-01,grain,4715
1995-06-01,grain,4974
1995-07-01,grain,4223
1995-08-01,grain,5000
1995-09-01,grain,4235
1995-10-01,grain,4554
1995-11-01,grain,4851
1995-12-01,grain,4826
1996-01-01,grain,3699
1996-02-01,grain,3983
1996-03-01,grain,4262
1996-04-01,grain,4619
1996-05-01,grain,5219
1996-06-01,grain,4836
1996-07-01,grain,4941
1996-08-01,grain,5062
1996-09-01,grain,4365
1996-10-01,grain,5012
1996-11-01,grain,4850
1996-12-01,grain,5097
1997-01-01,grain,3758
1997-02-01,grain,3825
1997-03-01,grain,4454
1997-04-01,grain,4635
1997-05-01,grain,5210
1997-06-01,grain,5057
1997-07-01,grain,5231
1997-08-01,grain,5034
1997-09-01,grain,4970
1997-10-01,grain,5342
1997-11-01,grain,4831
1997-12-01,grain,5965
1998-01-01,grain,3796
1998-02-01,grain,4019
1998-03-01,grain,4898
1998-04-01,grain,5090
1998-05-01,grain,5237
1998-06-01,grain,5447
1998-07-01,grain,5435
1998-08-01,grain,5107
1998-09-01,grain,5515
1998-10-01,grain,5583
1998-11-01,grain,5346
1998-12-01,grain,6286
1999-01-01,grain,4032
1999-02-01,grain,4435
1999-03-01,grain,5479
1999-04-01,grain,5483
1999-05-01,grain,5587
1999-06-01,grain,6176
1999-07-01,grain,5621
1999-08-01,grain,5889
1999-09-01,grain,5828
1999-10-01,grain,5849
1999-11-01,grain,6180
1999-12-01,grain,6771
2000-01-01,grain,4243
2000-02-01,grain,4952
2000-03-01,grain,6008
2000-04-01,grain,5353
2000-05-01,grain,6435
2000-06-01,grain,6673
2000-07-01,grain,5636
2000-08-01,grain,6630
2000-09-01,grain,5887
2000-10-01,grain,6322
2000-11-01,grain,6520
2000-12-01,grain,6678
2001-01-01,grain,5082
2001-02-01,grain,5216
2001-03-01,grain,5893
2001-04-01,grain,5894
2001-05-01,grain,6799
2001-06-01,grain,6667
2001-07-01,grain,6374
2001-08-01,grain,6840
2001-09-01,grain,5575
2001-10-01,grain,6545
2001-11-01,grain,6789
2001-12-01,grain,7180
2002-01-01,grain,5117
2002-02-01,grain,5442
2002-03-01,grain,6337
2002-04-01,grain,6525
2002-05-01,grain,7216
2002-06-01,grain,6761
2002-07-01,grain,6958
2002-08-01,grain,7070
2002-09-01,grain,6148
2002-10-01,grain,6924
2002-11-01,grain,6716
2002-12-01,grain,7975
2003-01-01,grain,5326
2003-02-01,grain,5609
2003-03-01,grain,6414
2003-04-01,grain,6741
2003-05-01,grain,7144
2003-06-01,grain,7133
2003-07-01,grain,7568
2003-08-01,grain,7266
2003-09-01,grain,6634
2003-10-01,grain,7626
2003-11-01,grain,6843
2003-12-01,grain,8540
2004-01-01,grain,5629
2004-02-01,grain,5898
2004-03-01,grain,7045
2004-04-01,grain,7094
2004-05-01,grain,7333
2004-06-01,grain,7918
2004-07-01,grain,7289
2004-08-01,grain,7396
2004-09-01,grain,7259
2004-10-01,grain,7268
2004-11-01,grain,7731
2004-12-01,grain,9058
2005-01-01,grain,5557
2005-02-01,grain,6237
2005-03-01,grain,7723
2005-04-01,grain,7262
2005-05-01,grain,8241
2005-06-01,grain,8757
2005-07-01,grain,7352
2005-08-01,grain,8496
2005-09-01,grain,7741
2005-10-01,grain,7710
2005-11-01,grain,8247
2005-12-01,grain,8902
2006-01-01,grain,6066
2006-02-01,grain,6590
2006-03-01,grain,7923
2006-04-01,grain,7335
2006-05-01,grain,8843
2006-06-01,grain,9327
2006-07-01,grain,7792
2006-08-01,grain,9156
2006-09-01,grain,8037
2006-10-01,grain,8640
2006-11-01,grain,9128
2006-12-01,grain,9545
2007-01-01,grain,6627
2007-02-01,grain,6743
2007-03-01,grain,8195
2007-04-01,grain,7828
2007-05-01,grain,9570
2007-06-01,grain,9484
2007-07-01,grain,8608
2007-08-01,grain,9543
2007-09-01,grain,8123
2007-10-01,grain,9649
2007-11-01,grain,9390
2007-12-01,grain,10065
2008-01-01,grain,7093
2008-02-01,grain,7483
2008-03-01,grain,8365
2008-04-01,grain,8895
2008-05-01,grain,9794
2008-06-01,grain,9977
2008-07-01,grain,9553
2008-08-01,grain,9375
2008-09-01,grain,9225
2008-10-01,grain,9948
2008-11-01,grain,8758
2008-12-01,grain,10839
2009-01-01,grain,7266
2009-02-01,grain,7578
2009-03-01,grain,8688
2009-04-01,grain,9162
2009-05-01,grain,9369
2009-06-01,grain,10167
2009-07-01,grain,9507
2009-08-01,grain,8923
2009-09-01,grain,9272
2009-10-01,grain,9075
2009-11-01,grain,8949
2009-12-01,grain,10843
2010-01-01,grain,6558
2010-02-01,grain,7481
2010-03-01,grain,9475
2010-04-01,grain,9424
2010-05-01,grain,9351
2010-06-01,grain,10552
2010-07-01,grain,9077
2010-08-01,grain,9273
2010-09-01,grain,9420
2010-10-01,grain,9413
2010-11-01,grain,9866
2010-12-01,grain,11455
2011-01-01,grain,6901
2011-02-01,grain,8014
2011-03-01,grain,9832
2011-04-01,grain,9281
2011-05-01,grain,9967
2011-06-01,grain,11344
2011-07-01,grain,9106
2011-08-01,grain,10469
2011-09-01,grain,10085
2011-10-01,grain,9612
2011-11-01,grain,10328
2011-12-01,grain,11483
2012-01-01,grain,7486
2012-02-01,grain,8641
2012-03-01,grain,9709
2012-04-01,grain,9423
2012-05-01,grain,11342
2012-06-01,grain,11274
2012-07-01,grain,9845
2012-08-01,grain,11163
2012-09-01,grain,9532
2012-10-01,grain,10754
2012-11-01,grain,10953
2012-12-01,grain,11922
2013-01-01,grain,8395
2013-02-01,grain,8888
2013-03-01,grain,10110
2013-04-01,grain,10493
2013-05-01,grain,12218
2013-06-01,grain,11385
2013-07-01,grain,11186
2013-08-01,grain,11462
2013-09-01,grain,10494
2013-10-01,grain,11540
2013-11-01,grain,11138
2013-12-01,grain,12709
2014-01-01,grain,8557
2014-02-01,grain,9059
2014-03-01,grain,10055
2014-04-01,grain,10977
2014-05-01,grain,11792
2014-06-01,grain,11904
2014-07-01,grain,10965
2014-08-01,grain,10981
2014-09-01,grain,10828
2014-10-01,grain,11817
2014-11-01,grain,10470
2014-12-01,grain,13310
2015-01-01,grain,8400
2015-02-01,grain,9062
2015-03-01,grain,10722
2015-04-01,grain,11107
2015-05-01,grain,11508
2015-06-01,grain,12904
2015-07-01,grain,11869
2015-08-01,grain,11224
2015-09-01,grain,12022
2015-10-01,grain,11983
2015-11-01,grain,11506
2015-12-01,grain,14183
2016-01-01,grain,8650
2016-02-01,grain,10323
2016-03-01,grain,12110
2016-04-01,grain,11424
2016-05-01,grain,12243
2016-06-01,grain,13686
2016-07-01,grain,10956
2016-08-01,grain,12706
2016-09-01,grain,12279
2016-10-01,grain,11914
2016-11-01,grain,13025
2016-12-01,grain,14431

View File

@@ -1,4 +0,0 @@
name: auto-ml-forecasting-beer-remote
dependencies:
- pip:
- azureml-sdk

View File

@@ -64,22 +64,23 @@
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import json\n",
"import logging\n",
"\n",
"from azureml.core import Workspace, Experiment, Dataset\n",
"from azureml.train.automl import AutoMLConfig\n",
"from datetime import datetime\n",
"from azureml.automl.core.featurization import FeaturizationConfig"
"\n",
"import azureml.core\n",
"import numpy as np\n",
"import pandas as pd\n",
"from azureml.automl.core.featurization import FeaturizationConfig\n",
"from azureml.core import Dataset, Experiment, Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
"This notebook is compatible with Azure ML SDK version 1.35.0 or later."
]
},
{
@@ -88,7 +89,6 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -119,7 +119,8 @@
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
@@ -398,8 +399,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Below we select the best model from all the training iterations using get_output method."
"### Retrieve the Best Run details\n",
"Below we retrieve the best Run object from among all the runs in the experiment."
]
},
{
@@ -408,8 +409,8 @@
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"fitted_model.steps"
"best_run = remote_run.get_best_child()\n",
"best_run"
]
},
{
@@ -418,7 +419,7 @@
"source": [
"## Featurization\n",
"\n",
"You can access the engineered feature names generated in time-series featurization. Note that a number of named holiday periods are represented. We recommend that you have at least one year of data when using this feature to ensure that all yearly holidays are captured in the training featurization."
"We can look at the engineered feature names generated in time-series featurization via the JSON file named 'engineered_feature_names.json' under the run outputs. Note that a number of named holiday periods are represented. We recommend that you have at least one year of data when using this feature to ensure that all yearly holidays are captured in the training featurization."
]
},
{
@@ -427,7 +428,14 @@
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps[\"timeseriestransformer\"].get_engineered_feature_names()"
"# Download the JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/engineered_feature_names.json\", \"engineered_feature_names.json\"\n",
")\n",
"with open(\"engineered_feature_names.json\", \"r\") as f:\n",
" records = json.load(f)\n",
"\n",
"records"
]
},
{
@@ -451,12 +459,26 @@
"metadata": {},
"outputs": [],
"source": [
"# Get the featurization summary as a list of JSON\n",
"featurization_summary = fitted_model.named_steps[\n",
" \"timeseriestransformer\"\n",
"].get_featurization_summary()\n",
"# View the featurization summary as a pandas dataframe\n",
"pd.DataFrame.from_records(featurization_summary)"
"# Download the featurization summary JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n",
")\n",
"\n",
"# Render the JSON as a pandas DataFrame\n",
"with open(\"featurization_summary.json\", \"r\") as f:\n",
" records = json.load(f)\n",
"fs = pd.DataFrame.from_records(records)\n",
"\n",
"# View a summary of the featurization\n",
"fs[\n",
" [\n",
" \"RawFeatureName\",\n",
" \"TypeDetected\",\n",
" \"Dropped\",\n",
" \"EngineeredFeatureCount\",\n",
" \"Transformations\",\n",
" ]\n",
"]"
]
},
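Both JSON artifacts are downloaded by fixed path; a discovery sketch for listing whatever else the best run wrote under outputs/. The experiment name and run id are placeholders, not values from this diff.

from azureml.core import Experiment, Run, Workspace

# Discovery sketch: re-attach to the best child run and list its output
# files; both names below are placeholders.
ws = Workspace.from_config()
experiment = Experiment(ws, "my-forecasting-experiment")  # placeholder
best_run = Run(experiment, "AutoML_placeholder_run_id")  # placeholder
for name in best_run.get_file_names():
    if name.startswith("outputs/"):
        print(name)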
{

View File

@@ -68,6 +68,7 @@
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import logging\n",
"\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\n",
@@ -90,7 +91,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
"This notebook is compatible with Azure ML SDK version 1.35.0 or later."
]
},
{
@@ -99,7 +100,6 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -132,7 +132,8 @@
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
@@ -398,8 +399,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retrieve the Best Model\n",
"Below we select the best model from all the training iterations using get_output method."
"## Retrieve the Best Run details\n",
"Below we retrieve the best Run object from among all the runs in the experiment."
]
},
{
@@ -408,8 +409,8 @@
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"fitted_model.steps"
"best_run = remote_run.get_best_child()\n",
"best_run"
]
},
{
@@ -417,7 +418,7 @@
"metadata": {},
"source": [
"## Featurization\n",
"You can access the engineered feature names generated in time-series featurization."
"We can look at the engineered feature names generated in time-series featurization via the JSON file named 'engineered_feature_names.json' under the run outputs."
]
},
{
@@ -426,7 +427,14 @@
"metadata": {},
"outputs": [],
"source": [
"fitted_model.named_steps[\"timeseriestransformer\"].get_engineered_feature_names()"
"# Download the JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/engineered_feature_names.json\", \"engineered_feature_names.json\"\n",
")\n",
"with open(\"engineered_feature_names.json\", \"r\") as f:\n",
" records = json.load(f)\n",
"\n",
"records"
]
},
{
@@ -449,12 +457,26 @@
"metadata": {},
"outputs": [],
"source": [
"# Get the featurization summary as a list of JSON\n",
"featurization_summary = fitted_model.named_steps[\n",
" \"timeseriestransformer\"\n",
"].get_featurization_summary()\n",
"# View the featurization summary as a pandas dataframe\n",
"pd.DataFrame.from_records(featurization_summary)"
"# Download the featurization summary JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n",
")\n",
"\n",
"# Render the JSON as a pandas DataFrame\n",
"with open(\"featurization_summary.json\", \"r\") as f:\n",
" records = json.load(f)\n",
"fs = pd.DataFrame.from_records(records)\n",
"\n",
"# View a summary of the featurization\n",
"fs[\n",
" [\n",
" \"RawFeatureName\",\n",
" \"TypeDetected\",\n",
" \"Dropped\",\n",
" \"EngineeredFeatureCount\",\n",
" \"Transformations\",\n",
" ]\n",
"]"
]
},
{
@@ -481,7 +503,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retreiving forecasts from the model\n",
"### Retrieving forecasts from the model\n",
"We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute."
]
},
@@ -641,7 +663,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model"
"### Retrieve the Best Run details"
]
},
{
@@ -650,7 +672,8 @@
"metadata": {},
"outputs": [],
"source": [
"best_run_lags, fitted_model_lags = advanced_remote_run.get_output()"
"best_run_lags = remote_run.get_best_child()\n",
"best_run_lags"
]
},
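get_best_child() returns the child run ranked best on the primary metric; a manual equivalent, assuming `remote_run` from the notebook and its primary metric (lower is better for this one):

# Manual sketch of what get_best_child() does, assuming `remote_run`.
metric = "normalized_root_mean_squared_error"
scores = {
    child.id: child.get_metrics().get(metric)
    for child in remote_run.get_children()
}
best_id = min((k for k, v in scores.items() if v is not None), key=scores.get)
print(best_id, scores[best_id])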
{

View File

@@ -85,7 +85,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
"This notebook is compatible with Azure ML SDK version 1.35.0 or later."
]
},
{
@@ -94,7 +94,6 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -122,7 +121,8 @@
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]

View File

@@ -30,7 +30,7 @@
},
"source": [
"# Automated Machine Learning\n",
"**Beer Production Forecasting**\n",
"**Github DAU Forecasting**\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
@@ -48,7 +48,7 @@
},
"source": [
"## Introduction\n",
"This notebook demonstrates demand forecasting for Beer Production Dataset using AutoML.\n",
"This notebook demonstrates demand forecasting for Github Daily Active Users Dataset using AutoML.\n",
"\n",
"AutoML highlights here include using Deep Learning forecasts, Arima, Prophet, Remote Execution and Remote Inferencing, and working with the `forecast` function. Please also look at the additional forecasting notebooks, which document lagging, rolling windows, forecast quantiles, other ways to use the forecast function, and forecaster deployment.\n",
"\n",
@@ -57,7 +57,7 @@
"Notebook synopsis:\n",
"\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Configuration and remote run of AutoML for a time-series model exploring Regression learners, Arima, Prophet and DNNs\n",
"2. Configuration and remote run of AutoML for a time-series model exploring DNNs\n",
"3. Evaluating the fitted model using a rolling test "
]
},
@@ -92,8 +92,7 @@
"# Squash warning messages for cleaner output in the notebook\n",
"warnings.showwarning = lambda *args, **kwargs: None\n",
"\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core import Workspace, Experiment, Dataset\n",
"from azureml.train.automl import AutoMLConfig\n",
"from matplotlib import pyplot as plt\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error\n",
@@ -104,7 +103,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
"This notebook is compatible with Azure ML SDK version 1.35.0 or later."
]
},
{
@@ -113,7 +112,6 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -139,7 +137,7 @@
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = \"beer-remote-cpu\"\n",
"experiment_name = \"github-remote-cpu\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
@@ -149,7 +147,8 @@
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
@@ -180,7 +179,7 @@
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"beer-cluster\"\n",
"cpu_cluster_name = \"github-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
@@ -203,7 +202,7 @@
},
"source": [
"## Data\n",
"Read Beer demand data from file, and preview data."
"Read Github DAU data from file, and preview data."
]
},
{
@@ -246,21 +245,19 @@
"plt.tight_layout()\n",
"\n",
"plt.subplot(2, 1, 1)\n",
"plt.title(\"Beer Production By Year\")\n",
"df = pd.read_csv(\n",
" \"Beer_no_valid_split_train.csv\", parse_dates=True, index_col=\"DATE\"\n",
").drop(columns=\"grain\")\n",
"plt.title(\"Github Daily Active User By Year\")\n",
"df = pd.read_csv(\"github_dau_2011-2018_train.csv\", parse_dates=True, index_col=\"date\")\n",
"test_df = pd.read_csv(\n",
" \"Beer_no_valid_split_test.csv\", parse_dates=True, index_col=\"DATE\"\n",
").drop(columns=\"grain\")\n",
" \"github_dau_2011-2018_test.csv\", parse_dates=True, index_col=\"date\"\n",
")\n",
"plt.plot(df)\n",
"\n",
"plt.subplot(2, 1, 2)\n",
"plt.title(\"Beer Production By Month\")\n",
"plt.title(\"Github Daily Active User By Month\")\n",
"groups = df.groupby(df.index.month)\n",
"months = concat([DataFrame(x[1].values) for x in groups], axis=1)\n",
"months = DataFrame(months)\n",
"months.columns = range(1, 13)\n",
"months.columns = range(1, 49)\n",
"months.boxplot()\n",
"\n",
"plt.show()"
@@ -275,10 +272,10 @@
},
"outputs": [],
"source": [
"target_column_name = \"BeerProduction\"\n",
"time_column_name = \"DATE\"\n",
"target_column_name = \"count\"\n",
"time_column_name = \"date\"\n",
"time_series_id_column_names = []\n",
"freq = \"M\" # Monthly data"
"freq = \"D\" # Daily data"
]
},
{
@@ -301,40 +298,21 @@
"from helper import split_full_for_forecasting\n",
"\n",
"train, valid = split_full_for_forecasting(df, time_column_name)\n",
"train.to_csv(\"train.csv\")\n",
"valid.to_csv(\"valid.csv\")\n",
"test_df.to_csv(\"test.csv\")\n",
"\n",
"# Reset index to create a Tabular Dataset.\n",
"train.reset_index(inplace=True)\n",
"valid.reset_index(inplace=True)\n",
"test_df.reset_index(inplace=True)\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"datastore.upload_files(\n",
" files=[\"./train.csv\"],\n",
" target_path=\"beer-dataset/tabular/\",\n",
" overwrite=True,\n",
" show_progress=True,\n",
"train_dataset = Dataset.Tabular.register_pandas_dataframe(\n",
" train, target=(datastore, \"dataset/\"), name=\"Github_DAU_train\"\n",
")\n",
"datastore.upload_files(\n",
" files=[\"./valid.csv\"],\n",
" target_path=\"beer-dataset/tabular/\",\n",
" overwrite=True,\n",
" show_progress=True,\n",
"valid_dataset = Dataset.Tabular.register_pandas_dataframe(\n",
" valid, target=(datastore, \"dataset/\"), name=\"Github_DAU_valid\"\n",
")\n",
"datastore.upload_files(\n",
" files=[\"./test.csv\"],\n",
" target_path=\"beer-dataset/tabular/\",\n",
" overwrite=True,\n",
" show_progress=True,\n",
")\n",
"\n",
"from azureml.core import Dataset\n",
"\n",
"train_dataset = Dataset.Tabular.from_delimited_files(\n",
" path=[(datastore, \"beer-dataset/tabular/train.csv\")]\n",
")\n",
"valid_dataset = Dataset.Tabular.from_delimited_files(\n",
" path=[(datastore, \"beer-dataset/tabular/valid.csv\")]\n",
")\n",
"test_dataset = Dataset.Tabular.from_delimited_files(\n",
" path=[(datastore, \"beer-dataset/tabular/test.csv\")]\n",
"test_dataset = Dataset.Tabular.register_pandas_dataframe(\n",
" test_df, target=(datastore, \"dataset/\"), name=\"Github_DAU_test\"\n",
")"
]
},
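register_pandas_dataframe replaces the earlier upload-then-from_delimited_files steps; the registered datasets can be fetched by name later. A retrieval sketch, assuming the names registered above:

from azureml.core import Dataset, Workspace

# Retrieval sketch: registered tabular datasets are fetched by name in a
# later session or on the remote compute target.
ws = Workspace.from_config()
train_dataset = Dataset.get_by_name(ws, "Github_DAU_train")
print(train_dataset.take(3).to_pandas_dataframe())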
@@ -397,10 +375,10 @@
"forecasting_parameters = ForecastingParameters(\n",
" time_column_name=time_column_name,\n",
" forecast_horizon=forecast_horizon,\n",
" freq=\"MS\", # Set the forecast frequency to be monthly (start of the month)\n",
" freq=\"D\", # Set the forecast frequency to be daily\n",
")\n",
"\n",
"# We will disable the enable_early_stopping flag to ensure the DNN model is recommended for demonstration purpose.\n",
"# To only allow the TCNForecaster we set the allowed_models parameter to reflect this.\n",
"automl_config = AutoMLConfig(\n",
" task=\"forecasting\",\n",
" primary_metric=\"normalized_root_mean_squared_error\",\n",
@@ -413,7 +391,7 @@
" max_concurrent_iterations=4,\n",
" max_cores_per_iteration=-1,\n",
" enable_dnn=True,\n",
" enable_early_stopping=False,\n",
" allowed_models=[\"TCNForecaster\"],\n",
" forecasting_parameters=forecasting_parameters,\n",
")"
]
@@ -506,7 +484,9 @@
"if not forecast_model in summary_df[\"run_id\"]:\n",
" forecast_model = \"ForecastTCN\"\n",
"\n",
"best_dnn_run_id = summary_df[\"run_id\"][forecast_model]\n",
"best_dnn_run_id = summary_df[summary_df[\"Score\"] == summary_df[\"Score\"].min()][\n",
" \"run_id\"\n",
"][forecast_model]\n",
"best_dnn_run = Run(experiment, best_dnn_run_id)"
]
},
@@ -567,11 +547,6 @@
},
"outputs": [],
"source": [
"from azureml.core import Dataset\n",
"\n",
"test_dataset = Dataset.Tabular.from_delimited_files(\n",
" path=[(datastore, \"beer-dataset/tabular/test.csv\")]\n",
")\n",
"# preview the first 5 rows of the dataset\n",
"test_dataset.take(5).to_pandas_dataframe()"
]
@@ -582,7 +557,7 @@
"metadata": {},
"outputs": [],
"source": [
"compute_target = ws.compute_targets[\"beer-cluster\"]\n",
"compute_target = ws.compute_targets[\"github-cluster\"]\n",
"test_experiment = Experiment(ws, experiment_name + \"_test\")"
]
},

View File

@@ -0,0 +1,4 @@
name: auto-ml-forecasting-github-dau
dependencies:
- pip:
- azureml-sdk

View File

@@ -0,0 +1,455 @@
date,count,day_of_week,month_of_year,holiday
2017-06-04,104663,6.0,5.0,0.0
2017-06-05,155824,0.0,5.0,0.0
2017-06-06,164908,1.0,5.0,0.0
2017-06-07,170309,2.0,5.0,0.0
2017-06-08,164256,3.0,5.0,0.0
2017-06-09,153406,4.0,5.0,0.0
2017-06-10,97024,5.0,5.0,0.0
2017-06-11,103442,6.0,5.0,0.0
2017-06-12,160768,0.0,5.0,0.0
2017-06-13,166288,1.0,5.0,0.0
2017-06-14,163819,2.0,5.0,0.0
2017-06-15,157593,3.0,5.0,0.0
2017-06-16,149259,4.0,5.0,0.0
2017-06-17,95579,5.0,5.0,0.0
2017-06-18,98723,6.0,5.0,0.0
2017-06-19,159076,0.0,5.0,0.0
2017-06-20,163340,1.0,5.0,0.0
2017-06-21,163344,2.0,5.0,0.0
2017-06-22,159528,3.0,5.0,0.0
2017-06-23,146563,4.0,5.0,0.0
2017-06-24,92631,5.0,5.0,0.0
2017-06-25,96549,6.0,5.0,0.0
2017-06-26,153249,0.0,5.0,0.0
2017-06-27,160357,1.0,5.0,0.0
2017-06-28,159941,2.0,5.0,0.0
2017-06-29,156781,3.0,5.0,0.0
2017-06-30,144709,4.0,5.0,0.0
2017-07-01,89101,5.0,6.0,0.0
2017-07-02,93046,6.0,6.0,0.0
2017-07-03,144113,0.0,6.0,0.0
2017-07-04,143061,1.0,6.0,1.0
2017-07-05,154603,2.0,6.0,0.0
2017-07-06,157200,3.0,6.0,0.0
2017-07-07,147213,4.0,6.0,0.0
2017-07-08,92348,5.0,6.0,0.0
2017-07-09,97018,6.0,6.0,0.0
2017-07-10,157192,0.0,6.0,0.0
2017-07-11,161819,1.0,6.0,0.0
2017-07-12,161998,2.0,6.0,0.0
2017-07-13,160280,3.0,6.0,0.0
2017-07-14,146818,4.0,6.0,0.0
2017-07-15,93041,5.0,6.0,0.0
2017-07-16,97505,6.0,6.0,0.0
2017-07-17,156167,0.0,6.0,0.0
2017-07-18,162855,1.0,6.0,0.0
2017-07-19,162519,2.0,6.0,0.0
2017-07-20,159941,3.0,6.0,0.0
2017-07-21,148460,4.0,6.0,0.0
2017-07-22,93431,5.0,6.0,0.0
2017-07-23,98553,6.0,6.0,0.0
2017-07-24,156202,0.0,6.0,0.0
2017-07-25,162503,1.0,6.0,0.0
2017-07-26,158479,2.0,6.0,0.0
2017-07-27,158192,3.0,6.0,0.0
2017-07-28,147108,4.0,6.0,0.0
2017-07-29,93799,5.0,6.0,0.0
2017-07-30,97920,6.0,6.0,0.0
2017-07-31,152197,0.0,6.0,0.0
2017-08-01,158477,1.0,7.0,0.0
2017-08-02,159089,2.0,7.0,0.0
2017-08-03,157182,3.0,7.0,0.0
2017-08-04,146345,4.0,7.0,0.0
2017-08-05,92534,5.0,7.0,0.0
2017-08-06,97128,6.0,7.0,0.0
2017-08-07,151359,0.0,7.0,0.0
2017-08-08,159895,1.0,7.0,0.0
2017-08-09,158329,2.0,7.0,0.0
2017-08-10,155468,3.0,7.0,0.0
2017-08-11,144914,4.0,7.0,0.0
2017-08-12,92258,5.0,7.0,0.0
2017-08-13,95933,6.0,7.0,0.0
2017-08-14,147706,0.0,7.0,0.0
2017-08-15,151115,1.0,7.0,0.0
2017-08-16,157640,2.0,7.0,0.0
2017-08-17,156600,3.0,7.0,0.0
2017-08-18,146980,4.0,7.0,0.0
2017-08-19,94592,5.0,7.0,0.0
2017-08-20,99320,6.0,7.0,0.0
2017-08-21,145727,0.0,7.0,0.0
2017-08-22,160260,1.0,7.0,0.0
2017-08-23,160440,2.0,7.0,0.0
2017-08-24,157830,3.0,7.0,0.0
2017-08-25,145822,4.0,7.0,0.0
2017-08-26,94706,5.0,7.0,0.0
2017-08-27,99047,6.0,7.0,0.0
2017-08-28,152112,0.0,7.0,0.0
2017-08-29,162440,1.0,7.0,0.0
2017-08-30,162902,2.0,7.0,0.0
2017-08-31,159498,3.0,7.0,0.0
2017-09-01,145689,4.0,8.0,0.0
2017-09-02,93589,5.0,8.0,0.0
2017-09-03,100058,6.0,8.0,0.0
2017-09-04,140865,0.0,8.0,1.0
2017-09-05,165715,1.0,8.0,0.0
2017-09-06,167463,2.0,8.0,0.0
2017-09-07,164811,3.0,8.0,0.0
2017-09-08,156157,4.0,8.0,0.0
2017-09-09,101358,5.0,8.0,0.0
2017-09-10,107915,6.0,8.0,0.0
2017-09-11,167845,0.0,8.0,0.0
2017-09-12,172756,1.0,8.0,0.0
2017-09-13,172851,2.0,8.0,0.0
2017-09-14,171675,3.0,8.0,0.0
2017-09-15,159266,4.0,8.0,0.0
2017-09-16,103547,5.0,8.0,0.0
2017-09-17,110964,6.0,8.0,0.0
2017-09-18,170976,0.0,8.0,0.0
2017-09-19,177864,1.0,8.0,0.0
2017-09-20,173567,2.0,8.0,0.0
2017-09-21,172017,3.0,8.0,0.0
2017-09-22,161357,4.0,8.0,0.0
2017-09-23,104681,5.0,8.0,0.0
2017-09-24,111711,6.0,8.0,0.0
2017-09-25,173517,0.0,8.0,0.0
2017-09-26,180049,1.0,8.0,0.0
2017-09-27,178307,2.0,8.0,0.0
2017-09-28,174157,3.0,8.0,0.0
2017-09-29,161707,4.0,8.0,0.0
2017-09-30,110536,5.0,8.0,0.0
2017-10-01,106505,6.0,9.0,0.0
2017-10-02,157565,0.0,9.0,0.0
2017-10-03,164764,1.0,9.0,0.0
2017-10-04,163383,2.0,9.0,0.0
2017-10-05,162847,3.0,9.0,0.0
2017-10-06,153575,4.0,9.0,0.0
2017-10-07,107472,5.0,9.0,0.0
2017-10-08,116127,6.0,9.0,0.0
2017-10-09,174457,0.0,9.0,1.0
2017-10-10,185217,1.0,9.0,0.0
2017-10-11,185120,2.0,9.0,0.0
2017-10-12,180844,3.0,9.0,0.0
2017-10-13,170178,4.0,9.0,0.0
2017-10-14,112754,5.0,9.0,0.0
2017-10-15,121251,6.0,9.0,0.0
2017-10-16,183906,0.0,9.0,0.0
2017-10-17,188945,1.0,9.0,0.0
2017-10-18,187297,2.0,9.0,0.0
2017-10-19,183867,3.0,9.0,0.0
2017-10-20,173021,4.0,9.0,0.0
2017-10-21,115851,5.0,9.0,0.0
2017-10-22,126088,6.0,9.0,0.0
2017-10-23,189452,0.0,9.0,0.0
2017-10-24,194412,1.0,9.0,0.0
2017-10-25,192293,2.0,9.0,0.0
2017-10-26,190163,3.0,9.0,0.0
2017-10-27,177053,4.0,9.0,0.0
2017-10-28,114934,5.0,9.0,0.0
2017-10-29,125289,6.0,9.0,0.0
2017-10-30,189245,0.0,9.0,0.0
2017-10-31,191480,1.0,9.0,0.0
2017-11-01,182281,2.0,10.0,0.0
2017-11-02,186351,3.0,10.0,0.0
2017-11-03,175422,4.0,10.0,0.0
2017-11-04,118160,5.0,10.0,0.0
2017-11-05,127602,6.0,10.0,0.0
2017-11-06,191067,0.0,10.0,0.0
2017-11-07,197083,1.0,10.0,0.0
2017-11-08,194333,2.0,10.0,0.0
2017-11-09,193914,3.0,10.0,0.0
2017-11-10,179933,4.0,10.0,1.0
2017-11-11,121346,5.0,10.0,0.0
2017-11-12,131900,6.0,10.0,0.0
2017-11-13,196969,0.0,10.0,0.0
2017-11-14,201949,1.0,10.0,0.0
2017-11-15,198424,2.0,10.0,0.0
2017-11-16,196902,3.0,10.0,0.0
2017-11-17,183893,4.0,10.0,0.0
2017-11-18,122767,5.0,10.0,0.0
2017-11-19,130890,6.0,10.0,0.0
2017-11-20,194515,0.0,10.0,0.0
2017-11-21,198601,1.0,10.0,0.0
2017-11-22,191041,2.0,10.0,0.0
2017-11-23,170321,3.0,10.0,1.0
2017-11-24,155623,4.0,10.0,0.0
2017-11-25,115759,5.0,10.0,0.0
2017-11-26,128771,6.0,10.0,0.0
2017-11-27,199419,0.0,10.0,0.0
2017-11-28,207253,1.0,10.0,0.0
2017-11-29,205406,2.0,10.0,0.0
2017-11-30,200674,3.0,10.0,0.0
2017-12-01,187017,4.0,11.0,0.0
2017-12-02,129735,5.0,11.0,0.0
2017-12-03,139120,6.0,11.0,0.0
2017-12-04,205505,0.0,11.0,0.0
2017-12-05,208218,1.0,11.0,0.0
2017-12-06,202480,2.0,11.0,0.0
2017-12-07,197822,3.0,11.0,0.0
2017-12-08,180686,4.0,11.0,0.0
2017-12-09,123667,5.0,11.0,0.0
2017-12-10,130987,6.0,11.0,0.0
2017-12-11,193901,0.0,11.0,0.0
2017-12-12,194997,1.0,11.0,0.0
2017-12-13,192063,2.0,11.0,0.0
2017-12-14,186496,3.0,11.0,0.0
2017-12-15,170812,4.0,11.0,0.0
2017-12-16,110474,5.0,11.0,0.0
2017-12-17,118165,6.0,11.0,0.0
2017-12-18,176843,0.0,11.0,0.0
2017-12-19,179550,1.0,11.0,0.0
2017-12-20,173506,2.0,11.0,0.0
2017-12-21,165910,3.0,11.0,0.0
2017-12-22,145886,4.0,11.0,0.0
2017-12-23,95246,5.0,11.0,0.0
2017-12-24,88781,6.0,11.0,0.0
2017-12-25,98189,0.0,11.0,1.0
2017-12-26,121383,1.0,11.0,0.0
2017-12-27,135300,2.0,11.0,0.0
2017-12-28,136827,3.0,11.0,0.0
2017-12-29,127700,4.0,11.0,0.0
2017-12-30,93014,5.0,11.0,0.0
2017-12-31,82878,6.0,11.0,0.0
2018-01-01,86419,0.0,0.0,1.0
2018-01-02,147428,1.0,0.0,0.0
2018-01-03,162193,2.0,0.0,0.0
2018-01-04,163784,3.0,0.0,0.0
2018-01-05,158606,4.0,0.0,0.0
2018-01-06,113467,5.0,0.0,0.0
2018-01-07,118313,6.0,0.0,0.0
2018-01-08,175623,0.0,0.0,0.0
2018-01-09,183880,1.0,0.0,0.0
2018-01-10,183945,2.0,0.0,0.0
2018-01-11,181769,3.0,0.0,0.0
2018-01-12,170552,4.0,0.0,0.0
2018-01-13,115707,5.0,0.0,0.0
2018-01-14,121191,6.0,0.0,0.0
2018-01-15,176127,0.0,0.0,1.0
2018-01-16,188032,1.0,0.0,0.0
2018-01-17,189871,2.0,0.0,0.0
2018-01-18,189348,3.0,0.0,0.0
2018-01-19,177456,4.0,0.0,0.0
2018-01-20,123321,5.0,0.0,0.0
2018-01-21,128306,6.0,0.0,0.0
2018-01-22,186132,0.0,0.0,0.0
2018-01-23,197618,1.0,0.0,0.0
2018-01-24,196402,2.0,0.0,0.0
2018-01-25,192722,3.0,0.0,0.0
2018-01-26,179415,4.0,0.0,0.0
2018-01-27,125769,5.0,0.0,0.0
2018-01-28,133306,6.0,0.0,0.0
2018-01-29,194151,0.0,0.0,0.0
2018-01-30,198680,1.0,0.0,0.0
2018-01-31,198652,2.0,0.0,0.0
2018-02-01,195472,3.0,1.0,0.0
2018-02-02,183173,4.0,1.0,0.0
2018-02-03,124276,5.0,1.0,0.0
2018-02-04,129054,6.0,1.0,0.0
2018-02-05,190024,0.0,1.0,0.0
2018-02-06,198658,1.0,1.0,0.0
2018-02-07,198272,2.0,1.0,0.0
2018-02-08,195339,3.0,1.0,0.0
2018-02-09,183086,4.0,1.0,0.0
2018-02-10,122536,5.0,1.0,0.0
2018-02-11,133033,6.0,1.0,0.0
2018-02-12,185386,0.0,1.0,0.0
2018-02-13,184789,1.0,1.0,0.0
2018-02-14,176089,2.0,1.0,0.0
2018-02-15,171317,3.0,1.0,0.0
2018-02-16,162693,4.0,1.0,0.0
2018-02-17,116342,5.0,1.0,0.0
2018-02-18,122466,6.0,1.0,0.0
2018-02-19,172364,0.0,1.0,1.0
2018-02-20,185896,1.0,1.0,0.0
2018-02-21,188166,2.0,1.0,0.0
2018-02-22,189427,3.0,1.0,0.0
2018-02-23,178732,4.0,1.0,0.0
2018-02-24,132664,5.0,1.0,0.0
2018-02-25,134008,6.0,1.0,0.0
2018-02-26,200075,0.0,1.0,0.0
2018-02-27,207996,1.0,1.0,0.0
2018-02-28,204416,2.0,1.0,0.0
2018-03-01,201320,3.0,2.0,0.0
2018-03-02,188205,4.0,2.0,0.0
2018-03-03,131162,5.0,2.0,0.0
2018-03-04,138320,6.0,2.0,0.0
2018-03-05,207326,0.0,2.0,0.0
2018-03-06,212462,1.0,2.0,0.0
2018-03-07,209357,2.0,2.0,0.0
2018-03-08,194876,3.0,2.0,0.0
2018-03-09,193761,4.0,2.0,0.0
2018-03-10,133449,5.0,2.0,0.0
2018-03-11,142258,6.0,2.0,0.0
2018-03-12,208753,0.0,2.0,0.0
2018-03-13,210602,1.0,2.0,0.0
2018-03-14,214236,2.0,2.0,0.0
2018-03-15,210761,3.0,2.0,0.0
2018-03-16,196619,4.0,2.0,0.0
2018-03-17,133056,5.0,2.0,0.0
2018-03-18,141335,6.0,2.0,0.0
2018-03-19,211580,0.0,2.0,0.0
2018-03-20,219051,1.0,2.0,0.0
2018-03-21,215435,2.0,2.0,0.0
2018-03-22,211961,3.0,2.0,0.0
2018-03-23,196009,4.0,2.0,0.0
2018-03-24,132390,5.0,2.0,0.0
2018-03-25,140021,6.0,2.0,0.0
2018-03-26,205273,0.0,2.0,0.0
2018-03-27,212686,1.0,2.0,0.0
2018-03-28,210683,2.0,2.0,0.0
2018-03-29,189044,3.0,2.0,0.0
2018-03-30,170256,4.0,2.0,0.0
2018-03-31,125999,5.0,2.0,0.0
2018-04-01,126749,6.0,3.0,0.0
2018-04-02,186546,0.0,3.0,0.0
2018-04-03,207905,1.0,3.0,0.0
2018-04-04,201528,2.0,3.0,0.0
2018-04-05,188580,3.0,3.0,0.0
2018-04-06,173714,4.0,3.0,0.0
2018-04-07,125723,5.0,3.0,0.0
2018-04-08,142545,6.0,3.0,0.0
2018-04-09,204767,0.0,3.0,0.0
2018-04-10,212048,1.0,3.0,0.0
2018-04-11,210517,2.0,3.0,0.0
2018-04-12,206924,3.0,3.0,0.0
2018-04-13,191679,4.0,3.0,0.0
2018-04-14,126394,5.0,3.0,0.0
2018-04-15,137279,6.0,3.0,0.0
2018-04-16,208085,0.0,3.0,0.0
2018-04-17,213273,1.0,3.0,0.0
2018-04-18,211580,2.0,3.0,0.0
2018-04-19,206037,3.0,3.0,0.0
2018-04-20,191211,4.0,3.0,0.0
2018-04-21,125564,5.0,3.0,0.0
2018-04-22,136469,6.0,3.0,0.0
2018-04-23,206288,0.0,3.0,0.0
2018-04-24,212115,1.0,3.0,0.0
2018-04-25,207948,2.0,3.0,0.0
2018-04-26,205759,3.0,3.0,0.0
2018-04-27,181330,4.0,3.0,0.0
2018-04-28,130046,5.0,3.0,0.0
2018-04-29,120802,6.0,3.0,0.0
2018-04-30,170390,0.0,3.0,0.0
2018-05-01,169054,1.0,4.0,0.0
2018-05-02,197891,2.0,4.0,0.0
2018-05-03,199820,3.0,4.0,0.0
2018-05-04,186783,4.0,4.0,0.0
2018-05-05,124420,5.0,4.0,0.0
2018-05-06,130666,6.0,4.0,0.0
2018-05-07,196014,0.0,4.0,0.0
2018-05-08,203058,1.0,4.0,0.0
2018-05-09,198582,2.0,4.0,0.0
2018-05-10,191321,3.0,4.0,0.0
2018-05-11,183639,4.0,4.0,0.0
2018-05-12,122023,5.0,4.0,0.0
2018-05-13,128775,6.0,4.0,0.0
2018-05-14,199104,0.0,4.0,0.0
2018-05-15,200658,1.0,4.0,0.0
2018-05-16,201541,2.0,4.0,0.0
2018-05-17,196886,3.0,4.0,0.0
2018-05-18,188597,4.0,4.0,0.0
2018-05-19,121392,5.0,4.0,0.0
2018-05-20,126981,6.0,4.0,0.0
2018-05-21,189291,0.0,4.0,0.0
2018-05-22,203038,1.0,4.0,0.0
2018-05-23,205330,2.0,4.0,0.0
2018-05-24,199208,3.0,4.0,0.0
2018-05-25,187768,4.0,4.0,0.0
2018-05-26,117635,5.0,4.0,0.0
2018-05-27,124352,6.0,4.0,0.0
2018-05-28,180398,0.0,4.0,1.0
2018-05-29,194170,1.0,4.0,0.0
2018-05-30,200281,2.0,4.0,0.0
2018-05-31,197244,3.0,4.0,0.0
2018-06-01,184037,4.0,5.0,0.0
2018-06-02,121135,5.0,5.0,0.0
2018-06-03,129389,6.0,5.0,0.0
2018-06-04,200331,0.0,5.0,0.0
2018-06-05,207735,1.0,5.0,0.0
2018-06-06,203354,2.0,5.0,0.0
2018-06-07,200520,3.0,5.0,0.0
2018-06-08,182038,4.0,5.0,0.0
2018-06-09,120164,5.0,5.0,0.0
2018-06-10,125256,6.0,5.0,0.0
2018-06-11,194786,0.0,5.0,0.0
2018-06-12,200815,1.0,5.0,0.0
2018-06-13,197740,2.0,5.0,0.0
2018-06-14,192294,3.0,5.0,0.0
2018-06-15,173587,4.0,5.0,0.0
2018-06-16,105955,5.0,5.0,0.0
2018-06-17,110780,6.0,5.0,0.0
2018-06-18,174582,0.0,5.0,0.0
2018-06-19,193310,1.0,5.0,0.0
2018-06-20,193062,2.0,5.0,0.0
2018-06-21,187986,3.0,5.0,0.0
2018-06-22,173606,4.0,5.0,0.0
2018-06-23,111795,5.0,5.0,0.0
2018-06-24,116134,6.0,5.0,0.0
2018-06-25,185919,0.0,5.0,0.0
2018-06-26,193142,1.0,5.0,0.0
2018-06-27,188114,2.0,5.0,0.0
2018-06-28,183737,3.0,5.0,0.0
2018-06-29,171496,4.0,5.0,0.0
2018-06-30,107210,5.0,5.0,0.0
2018-07-01,111053,6.0,6.0,0.0
2018-07-02,176198,0.0,6.0,0.0
2018-07-03,184040,1.0,6.0,0.0
2018-07-04,169783,2.0,6.0,1.0
2018-07-05,177996,3.0,6.0,0.0
2018-07-06,167378,4.0,6.0,0.0
2018-07-07,106401,5.0,6.0,0.0
2018-07-08,112327,6.0,6.0,0.0
2018-07-09,182835,0.0,6.0,0.0
2018-07-10,187694,1.0,6.0,0.0
2018-07-11,185762,2.0,6.0,0.0
2018-07-12,184099,3.0,6.0,0.0
2018-07-13,170860,4.0,6.0,0.0
2018-07-14,106799,5.0,6.0,0.0
2018-07-15,108475,6.0,6.0,0.0
2018-07-16,175704,0.0,6.0,0.0
2018-07-17,183596,1.0,6.0,0.0
2018-07-18,179897,2.0,6.0,0.0
2018-07-19,183373,3.0,6.0,0.0
2018-07-20,169626,4.0,6.0,0.0
2018-07-21,106785,5.0,6.0,0.0
2018-07-22,112387,6.0,6.0,0.0
2018-07-23,180572,0.0,6.0,0.0
2018-07-24,186943,1.0,6.0,0.0
2018-07-25,185744,2.0,6.0,0.0
2018-07-26,183117,3.0,6.0,0.0
2018-07-27,168526,4.0,6.0,0.0
2018-07-28,105936,5.0,6.0,0.0
2018-07-29,111708,6.0,6.0,0.0
2018-07-30,179950,0.0,6.0,0.0
2018-07-31,185930,1.0,6.0,0.0
2018-08-01,183366,2.0,7.0,0.0
2018-08-02,182412,3.0,7.0,0.0
2018-08-03,173429,4.0,7.0,0.0
2018-08-04,106108,5.0,7.0,0.0
2018-08-05,110059,6.0,7.0,0.0
2018-08-06,178355,0.0,7.0,0.0
2018-08-07,185518,1.0,7.0,0.0
2018-08-08,183204,2.0,7.0,0.0
2018-08-09,181276,3.0,7.0,0.0
2018-08-10,168297,4.0,7.0,0.0
2018-08-11,106488,5.0,7.0,0.0
2018-08-12,111786,6.0,7.0,0.0
2018-08-13,178620,0.0,7.0,0.0
2018-08-14,181922,1.0,7.0,0.0
2018-08-15,172198,2.0,7.0,0.0
2018-08-16,177367,3.0,7.0,0.0
2018-08-17,166550,4.0,7.0,0.0
2018-08-18,107011,5.0,7.0,0.0
2018-08-19,112299,6.0,7.0,0.0
2018-08-20,176718,0.0,7.0,0.0
2018-08-21,182562,1.0,7.0,0.0
2018-08-22,181484,2.0,7.0,0.0
2018-08-23,180317,3.0,7.0,0.0
2018-08-24,170197,4.0,7.0,0.0
2018-08-25,109383,5.0,7.0,0.0
2018-08-26,113373,6.0,7.0,0.0
2018-08-27,180142,0.0,7.0,0.0
2018-08-28,191628,1.0,7.0,0.0
2018-08-29,191149,2.0,7.0,0.0
2018-08-30,187503,3.0,7.0,0.0
2018-08-31,172280,4.0,7.0,0.0
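The file above is the daily GitHub active-user training data used by the forecasting notebook: one row per day with the target count plus calendar features (day_of_week, month_of_year, holiday). A quick sketch of inspecting it locally with pandas, assuming it is saved as train.csv (the local file name is an assumption):

import pandas as pd

# Parse the date column so the series gets a proper daily time index.
df = pd.read_csv("train.csv", parse_dates=["date"])
print(df.shape)                             # expected: (454, 5)
print(df["date"].min(), df["date"].max())   # 2017-06-04 .. 2018-08-31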

View File

@@ -79,9 +79,7 @@ def get_result_df(remote_run):
if "goal" in run.properties:
goal_minimize = run.properties["goal"].split("_")[-1] == "min"
summary_df = summary_df.T.sort_values(
"Score", ascending=goal_minimize
).drop_duplicates(["run_algorithm"])
summary_df = summary_df.T.sort_values("Score", ascending=goal_minimize)
summary_df = summary_df.set_index("run_algorithm")
return summary_df
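For reference, a typical call site for this helper, assuming remote_run is the completed AutoML parent run (a usage sketch, not part of the diff):

# Rows are indexed by run_algorithm after the change above; "Score" is
# sorted ascending when the primary metric is being minimized.
summary_df = get_result_df(remote_run)
print(summary_df.head())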
@@ -105,13 +103,8 @@ def run_inference(
train_run.download_file(
"outputs/{}".format(model_base_name), "inference/{}".format(model_base_name)
)
train_run.download_file("outputs/conda_env_v_1_0_0.yml", "inference/condafile.yml")
inference_env = Environment("myenv")
inference_env.docker.enabled = True
inference_env.python.conda_dependencies = CondaDependencies(
conda_dependencies_file_path="inference/condafile.yml"
)
inference_env = train_run.get_environment()
est = Estimator(
source_directory=script_folder,

View File

@@ -78,7 +78,8 @@
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Default datastore name\"] = dstore.name\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
@@ -381,7 +382,7 @@
"metadata": {},
"source": [
"### Submit the pipeline to run\n",
"Next we submit our pipeline to run. The whole training pipeline takes about 1h 11m using a Standard_D12_V2 VM with our current ParallelRunConfig setting."
"Next we submit our pipeline to run. The whole training pipeline takes about 1h using a Standard_D16_V3 VM with our current ParallelRunConfig setting."
]
},
{
@@ -571,7 +572,7 @@
"source": [
"## Retrieve results\n",
"\n",
"Forecast results can be retrieved through the following code. The prediction results summary and the actual predictions are downloaded the \"forecast_results\" folder"
"Forecast results can be retrieved through the following code. The prediction results summary and the actual predictions are downloaded in forecast_results folder"
]
},
{
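As a general pattern, artifacts a run writes under a known prefix can be pulled down with the base Run API. A minimal sketch, assuming run is the completed pipeline run and the results live under a forecast_results prefix (both names are assumptions here):

# Download everything the run logged under the forecast_results prefix.
run.download_files(
    prefix="forecast_results",
    output_directory="forecast_results",
)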

View File

@@ -30,7 +30,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"For this notebook we are using a synthetic dataset portraying sales data to predict the quantity of a vartiety of product SKUs across several states, stores, and product categories.\n",
"For this notebook we are using a synthetic dataset portraying sales data to predict the the quantity of a vartiety of product skus across several states, stores, and product categories.\n",
"\n",
"**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**"
]
@@ -78,7 +78,8 @@
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Default datastore name\"] = dstore.name\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
@@ -308,7 +309,7 @@
"source": [
"### Set up training parameters\n",
"\n",
"This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name definition.\n",
"This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings inncluding the name of the time column, the maximum forecast horizon, and the partition column name definition.\n",
"\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
@@ -324,7 +325,7 @@
"| **enable_early_stopping** | Flag to enable early termination if the score is not improving in the short term. |\n",
"| **time_column_name** | The name of your time column. |\n",
"| **enable_engineered_explanations** | Engineered feature explanations will be downloaded if enable_engineered_explanations flag is set to True. By default it is set to False to save storage space. |\n",
"| **time_series_id_column_name** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n",
"| **time_series_id_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n",
"| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n",
"| **pipeline_fetch_max_batch_size** | Determines how many pipelines (training algorithms) to fetch at a time for training, this helps reduce throttling when training at large scale. |\n",
"| **partition_column_names** | The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series. |"
@@ -355,8 +356,8 @@
" \"n_cross_validations\": 3,\n",
" \"time_column_name\": \"WeekStarting\",\n",
" \"drop_column_names\": \"Revenue\",\n",
" \"max_horizon\": 6,\n",
" \"grain_column_names\": partition_column_names,\n",
" \"forecast_horizon\": 6,\n",
" \"time_series_id_column_names\": partition_column_names,\n",
" \"track_child_runs\": False,\n",
"}\n",
"\n",
@@ -554,12 +555,12 @@
"| :--------------- | :------------------- |\n",
"| **experiment** | The experiment used for inference run. |\n",
"| **inference_data** | The data to use for inferencing. It should be the same schema as used for training.\n",
"| **compute_target** | The compute target that runs the inference pipeline.|\n",
"| **compute_target** The compute target that runs the inference pipeline.|\n",
"| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku). |\n",
"| **process_count_per_node** | The number of processes per node.\n",
"| **train_run_id** | \\[Optional\\] The run id of the hierarchy training, by default it is the latest successful training many model run in the experiment. |\n",
"| **train_experiment_name** | \\[Optional\\] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n",
"| **process_count_per_node** | \\[Optional\\] The number of processes per node, by default it's 4. |"
"| **process_count_per_node** The number of processes per node.\n",
"| **train_run_id** | \\[Optional] The run id of the hierarchy training, by default it is the latest successful training many model run in the experiment. |\n",
"| **train_experiment_name** | \\[Optional] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n",
"| **process_count_per_node** | \\[Optional] The number of processes per node, by default it's 4. |"
]
},
{
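To make the table concrete, these settings are ultimately passed to the notebook's inference-pipeline helper. A hypothetical sketch; the helper name build_inference_pipeline and its exact signature are illustrative, not the notebook's actual API:

# Hypothetical helper call showing how the documented parameters fit together.
inference_steps = build_inference_pipeline(
    experiment=experiment,
    inference_data=inference_dataset,
    compute_target=compute_target,
    node_count=4,
    process_count_per_node=4,    # optional, defaults to 4
    train_run_id=None,           # optional: latest successful training run
    train_experiment_name=None,  # optional: only if in a different experiment
)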

View File

@@ -58,21 +58,22 @@
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"import json\n",
"import logging\n",
"\n",
"from azureml.core.workspace import Workspace\n",
"import azureml.core\n",
"import pandas as pd\n",
"from azureml.automl.core.featurization import FeaturizationConfig\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.automl.core.featurization import FeaturizationConfig"
"from azureml.core.workspace import Workspace\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
"This notebook is compatible with Azure ML SDK version 1.35.0 or later."
]
},
{
@@ -81,7 +82,6 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -112,7 +112,8 @@
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
@@ -472,8 +473,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:"
"### Retrieve the Best Run details\n",
"Below we retrieve the best Run object from among all the runs in the experiment."
]
},
{
@@ -482,9 +483,9 @@
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"print(fitted_model.steps)\n",
"model_name = best_run.properties[\"model_name\"]"
"best_run = remote_run.get_best_child()\n",
"model_name = best_run.properties[\"model_name\"]\n",
"best_run"
]
},
{
@@ -502,16 +503,26 @@
"metadata": {},
"outputs": [],
"source": [
"custom_featurizer = fitted_model.named_steps[\"timeseriestransformer\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"custom_featurizer.get_featurization_summary()"
"# Download the featurization summary JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n",
")\n",
"\n",
"# Render the JSON as a pandas DataFrame\n",
"with open(\"featurization_summary.json\", \"r\") as f:\n",
" records = json.load(f)\n",
"fs = pd.DataFrame.from_records(records)\n",
"\n",
"# View a summary of the featurization\n",
"fs[\n",
" [\n",
" \"RawFeatureName\",\n",
" \"TypeDetected\",\n",
" \"Dropped\",\n",
" \"EngineeredFeatureCount\",\n",
" \"Transformations\",\n",
" ]\n",
"]"
]
},
{
@@ -538,7 +549,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retreiving forecasts from the model\n",
"### Retrieving forecasts from the model\n",
"We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and expecuted on the remote compute."
]
},

View File

@@ -229,7 +229,7 @@
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"pd.set_option(\"display.max_colwidth\", -1)\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"print(outputDf.T)"
]
@@ -387,8 +387,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the best model\n",
"Below we select the best model from all the training iterations using get_output method."
"### Retrieve the Best Run details\n",
"Below we retrieve the best Run object from among all the runs in the experiment."
]
},
{
@@ -397,8 +397,8 @@
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = remote_run.get_output()\n",
"fitted_model.steps"
"best_run = remote_run.get_best_child()\n",
"best_run"
]
},
{

View File

@@ -46,11 +46,11 @@ def kpss_test(series, **kw):
"""
if kw["store"]:
statistic, p_value, critical_values, rstore = stattools.kpss(
series, regression=kw["reg_type"], lags=kw["lags"], store=kw["store"]
series, regression=kw["reg_type"], nlags=kw["lags"], store=kw["store"]
)
else:
statistic, p_value, lags, critical_values = stattools.kpss(
series, regression=kw["reg_type"], lags=kw["lags"]
series, regression=kw["reg_type"], nlags=kw["lags"]
)
output = {
"statistic": statistic,

View File

@@ -1,21 +1,5 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -90,16 +74,6 @@
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -109,18 +83,19 @@
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-classification-ccard-local'\n",
"experiment_name = \"automl-classification-ccard-local\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Experiment Name\"] = experiment.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
@@ -142,7 +117,7 @@
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n",
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
"training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n",
"label_column_name = 'Class'"
"label_column_name = \"Class\""
]
},
{
@@ -168,22 +143,25 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "enable-ensemble"
},
"outputs": [],
"source": [
"automl_settings = {\n",
" \"n_cross_validations\": 3,\n",
" \"primary_metric\": 'AUC_weighted',\n",
" \"primary_metric\": \"average_precision_score_weighted\",\n",
" \"experiment_timeout_hours\": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible\n",
" \"verbosity\": logging.INFO,\n",
" \"enable_stack_ensemble\": False\n",
" \"enable_stack_ensemble\": False,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
" debug_log = 'automl_errors.log',\n",
"automl_config = AutoMLConfig(\n",
" task=\"classification\",\n",
" debug_log=\"automl_errors.log\",\n",
" training_data=training_data,\n",
" label_column_name=label_column_name,\n",
" **automl_settings\n",
" **automl_settings,\n",
")"
]
},
@@ -240,6 +218,7 @@
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(local_run).show()"
]
},
@@ -288,8 +267,12 @@
"outputs": [],
"source": [
"# convert the test data to dataframe\n",
"X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe()\n",
"y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe()"
"X_test_df = validation_data.drop_columns(\n",
" columns=[label_column_name]\n",
").to_pandas_dataframe()\n",
"y_test_df = validation_data.keep_columns(\n",
" columns=[label_column_name], validate=True\n",
").to_pandas_dataframe()"
]
},
{
@@ -324,19 +307,25 @@
"import itertools\n",
"\n",
"cf = confusion_matrix(y_test_df.values, y_pred)\n",
"plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')\n",
"plt.imshow(cf, cmap=plt.cm.Blues, interpolation=\"nearest\")\n",
"plt.colorbar()\n",
"plt.title('Confusion Matrix')\n",
"plt.xlabel('Predicted')\n",
"plt.ylabel('Actual')\n",
"class_labels = ['False','True']\n",
"plt.title(\"Confusion Matrix\")\n",
"plt.xlabel(\"Predicted\")\n",
"plt.ylabel(\"Actual\")\n",
"class_labels = [\"False\", \"True\"]\n",
"tick_marks = np.arange(len(class_labels))\n",
"plt.xticks(tick_marks, class_labels)\n",
"plt.yticks([-0.5,0,1,1.5],['','False','True',''])\n",
"plt.yticks([-0.5, 0, 1, 1.5], [\"\", \"False\", \"True\", \"\"])\n",
"# plotting text value inside cells\n",
"thresh = cf.max() / 2.\n",
"thresh = cf.max() / 2.0\n",
"for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):\n",
" plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')\n",
" plt.text(\n",
" j,\n",
" i,\n",
" format(cf[i, j], \"d\"),\n",
" horizontalalignment=\"center\",\n",
" color=\"white\" if cf[i, j] > thresh else \"black\",\n",
" )\n",
"plt.show()"
]
},
@@ -363,7 +352,10 @@
"client = ExplanationClient.from_run(best_run)\n",
"engineered_explanations = client.download_model_explanation(raw=False)\n",
"print(engineered_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
"print(\n",
" \"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n",
" + best_run.get_portal_url()\n",
")"
]
},
{
@@ -382,7 +374,10 @@
"source": [
"raw_explanations = client.download_model_explanation(raw=True)\n",
"print(raw_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
"print(\n",
" \"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n",
" + best_run.get_portal_url()\n",
")"
]
},
{
@@ -398,7 +393,7 @@
"metadata": {},
"outputs": [],
"source": [
"automl_run, fitted_model = local_run.get_output(metric='accuracy')"
"automl_run, fitted_model = local_run.get_output(metric=\"accuracy\")"
]
},
{
@@ -432,12 +427,18 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations\n",
"from azureml.train.automl.runtime.automl_explain_utilities import (\n",
" automl_setup_model_explanations,\n",
")\n",
"\n",
"automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, \n",
" X_test=X_test, y=y_train, \n",
" task='classification',\n",
" automl_run=automl_run)"
"automl_explainer_setup_obj = automl_setup_model_explanations(\n",
" fitted_model,\n",
" X=X_train,\n",
" X_test=X_test,\n",
" y=y_train,\n",
" task=\"classification\",\n",
" automl_run=automl_run,\n",
")"
]
},
{
@@ -455,13 +456,18 @@
"outputs": [],
"source": [
"from azureml.interpret.mimic_wrapper import MimicWrapper\n",
"explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator,\n",
"\n",
"explainer = MimicWrapper(\n",
" ws,\n",
" automl_explainer_setup_obj.automl_estimator,\n",
" explainable_model=automl_explainer_setup_obj.surrogate_model,\n",
" init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run,\n",
" init_dataset=automl_explainer_setup_obj.X_transform,\n",
" run=automl_explainer_setup_obj.automl_run,\n",
" features=automl_explainer_setup_obj.engineered_feature_names,\n",
" feature_maps=[automl_explainer_setup_obj.feature_map],\n",
" classes=automl_explainer_setup_obj.classes,\n",
" explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params)"
" explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params,\n",
")"
]
},
{
@@ -479,9 +485,14 @@
"outputs": [],
"source": [
"# Compute the engineered explanations\n",
"engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)\n",
"engineered_explanations = explainer.explain(\n",
" [\"local\", \"global\"], eval_dataset=automl_explainer_setup_obj.X_test_transform\n",
")\n",
"print(engineered_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
"print(\n",
" \"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n",
" + automl_run.get_portal_url()\n",
")"
]
},
{
@@ -499,12 +510,18 @@
"outputs": [],
"source": [
"# Compute the raw explanations\n",
"raw_explanations = explainer.explain(['local', 'global'], get_raw=True,\n",
"raw_explanations = explainer.explain(\n",
" [\"local\", \"global\"],\n",
" get_raw=True,\n",
" raw_feature_names=automl_explainer_setup_obj.raw_feature_names,\n",
" eval_dataset=automl_explainer_setup_obj.X_test_transform,\n",
" raw_eval_dataset=automl_explainer_setup_obj.X_test_raw)\n",
" raw_eval_dataset=automl_explainer_setup_obj.X_test_raw,\n",
")\n",
"print(raw_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
"print(\n",
" \"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n",
" + automl_run.get_portal_url()\n",
")"
]
},
{
@@ -524,15 +541,17 @@
"import joblib\n",
"\n",
"# Initialize the ScoringExplainer\n",
"scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map])\n",
"scoring_explainer = TreeScoringExplainer(\n",
" explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]\n",
")\n",
"\n",
"# Pickle scoring explainer locally to './scoring_explainer.pkl'\n",
"scoring_explainer_file_name = 'scoring_explainer.pkl'\n",
"with open(scoring_explainer_file_name, 'wb') as stream:\n",
"scoring_explainer_file_name = \"scoring_explainer.pkl\"\n",
"with open(scoring_explainer_file_name, \"wb\") as stream:\n",
" joblib.dump(scoring_explainer, stream)\n",
"\n",
"# Upload the scoring explainer to the automl run\n",
"automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name)"
"automl_run.upload_file(\"outputs/scoring_explainer.pkl\", scoring_explainer_file_name)"
]
},
{
@@ -551,10 +570,12 @@
"outputs": [],
"source": [
"# Register trained automl model present in the 'outputs' folder in the artifacts\n",
"original_model = automl_run.register_model(model_name='automl_model', \n",
" model_path='outputs/model.pkl')\n",
"scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer',\n",
" model_path='outputs/scoring_explainer.pkl')"
"original_model = automl_run.register_model(\n",
" model_name=\"automl_model\", model_path=\"outputs/model.pkl\"\n",
")\n",
"scoring_explainer_model = automl_run.register_model(\n",
" model_name=\"scoring_explainer\", model_path=\"outputs/scoring_explainer.pkl\"\n",
")"
]
},
{
@@ -575,7 +596,7 @@
"from azureml.automl.core.shared import constants\n",
"from azureml.core.environment import Environment\n",
"\n",
"automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml')\n",
"automl_run.download_file(constants.CONDA_ENV_FILE_PATH, \"myenv.yml\")\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"myenv"
]
@@ -598,7 +619,9 @@
"import joblib\n",
"import pandas as pd\n",
"from azureml.core.model import Model\n",
"from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations\n",
"from azureml.train.automl.runtime.automl_explain_utilities import (\n",
" automl_setup_model_explanations,\n",
")\n",
"\n",
"\n",
"def init():\n",
@@ -607,28 +630,35 @@
"\n",
" # Retrieve the path to the model file using the model name\n",
" # Assume original model is named original_prediction_model\n",
" automl_model_path = Model.get_model_path('automl_model')\n",
" scoring_explainer_path = Model.get_model_path('scoring_explainer')\n",
" automl_model_path = Model.get_model_path(\"automl_model\")\n",
" scoring_explainer_path = Model.get_model_path(\"scoring_explainer\")\n",
"\n",
" automl_model = joblib.load(automl_model_path)\n",
" scoring_explainer = joblib.load(scoring_explainer_path)\n",
"\n",
"\n",
"def run(raw_data):\n",
" data = pd.read_json(raw_data, orient='records') \n",
" data = pd.read_json(raw_data, orient=\"records\")\n",
" # Make prediction\n",
" predictions = automl_model.predict(data)\n",
" # Setup for inferencing explanations\n",
" automl_explainer_setup_obj = automl_setup_model_explanations(automl_model,\n",
" X_test=data, task='classification')\n",
" automl_explainer_setup_obj = automl_setup_model_explanations(\n",
" automl_model, X_test=data, task=\"classification\"\n",
" )\n",
" # Retrieve model explanations for engineered explanations\n",
" engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform)\n",
" engineered_local_importance_values = scoring_explainer.explain(\n",
" automl_explainer_setup_obj.X_test_transform\n",
" )\n",
" # Retrieve model explanations for raw explanations\n",
" raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True)\n",
" raw_local_importance_values = scoring_explainer.explain(\n",
" automl_explainer_setup_obj.X_test_transform, get_raw=True\n",
" )\n",
" # You can return any data type as long as it is JSON-serializable\n",
" return {'predictions': predictions.tolist(),\n",
" 'engineered_local_importance_values': engineered_local_importance_values,\n",
" 'raw_local_importance_values': raw_local_importance_values}\n"
" return {\n",
" \"predictions\": predictions.tolist(),\n",
" \"engineered_local_importance_values\": engineered_local_importance_values,\n",
" \"raw_local_importance_values\": raw_local_importance_values,\n",
" }"
]
},
{
@@ -647,7 +677,7 @@
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inf_config = InferenceConfig(entry_script='score.py', environment=myenv)"
"inf_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
@@ -668,17 +698,17 @@
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your cluster.\n",
"aks_name = 'scoring-explain'\n",
"aks_name = \"scoring-explain\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" aks_target = ComputeTarget(workspace=ws, name=aks_name)\n",
" print('Found existing cluster, use it.')\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2')\n",
" aks_target = ComputeTarget.create(workspace=ws, \n",
" name=aks_name,\n",
" provisioning_configuration=prov_config)\n",
" prov_config = AksCompute.provisioning_configuration(vm_size=\"STANDARD_D3_V2\")\n",
" aks_target = ComputeTarget.create(\n",
" workspace=ws, name=aks_name, provisioning_configuration=prov_config\n",
" )\n",
"aks_target.wait_for_completion(show_output=True)"
]
},
@@ -708,14 +738,16 @@
"metadata": {},
"outputs": [],
"source": [
"aks_service_name ='model-scoring-local-aks'\n",
"aks_service_name = \"model-scoring-local-aks\"\n",
"\n",
"aks_service = Model.deploy(workspace=ws,\n",
"aks_service = Model.deploy(\n",
" workspace=ws,\n",
" name=aks_service_name,\n",
" models=[scoring_explainer_model, original_model],\n",
" inference_config=inf_config,\n",
" deployment_config=aks_config,\n",
" deployment_target=aks_target)\n",
" deployment_target=aks_target,\n",
")\n",
"\n",
"aks_service.wait_for_deployment(show_output=True)\n",
"print(aks_service.state)"
@@ -752,18 +784,24 @@
"outputs": [],
"source": [
"# Serialize the first row of the test data into json\n",
"X_test_json = X_test_df[:1].to_json(orient='records')\n",
"X_test_json = X_test_df[:1].to_json(orient=\"records\")\n",
"print(X_test_json)\n",
"\n",
"# Call the service to get the predictions and the engineered and raw explanations\n",
"output = aks_service.run(X_test_json)\n",
"\n",
"# Print the predicted value\n",
"print('predictions:\\n{}\\n'.format(output['predictions']))\n",
"print(\"predictions:\\n{}\\n\".format(output[\"predictions\"]))\n",
"# Print the engineered feature importances for the predicted value\n",
"print('engineered_local_importance_values:\\n{}\\n'.format(output['engineered_local_importance_values']))\n",
"print(\n",
" \"engineered_local_importance_values:\\n{}\\n\".format(\n",
" output[\"engineered_local_importance_values\"]\n",
" )\n",
")\n",
"# Print the raw feature importances for the predicted value\n",
"print('raw_local_importance_values:\\n{}\\n'.format(output['raw_local_importance_values']))\n"
"print(\n",
" \"raw_local_importance_values:\\n{}\\n\".format(output[\"raw_local_importance_values\"])\n",
")"
]
},
{

View File

@@ -1,21 +1,5 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/regression-car-price-model-explaination-and-featurization/auto-ml-regression.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -78,6 +62,7 @@
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"\n",
"from azureml.automl.core.featurization import FeaturizationConfig\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.core.dataset import Dataset"
@@ -90,16 +75,6 @@
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -109,17 +84,18 @@
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment.\n",
"experiment_name = 'automl-regression-hardware-explain'\n",
"experiment_name = \"automl-regression-hardware-explain\"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace Name'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace Name\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Experiment Name\"] = experiment.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
@@ -152,12 +128,12 @@
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n",
" max_nodes=4)\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n",
" )\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
@@ -176,7 +152,7 @@
"metadata": {},
"outputs": [],
"source": [
"data = 'https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/machineData.csv'\n",
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/machineData.csv\"\n",
"\n",
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
"\n",
@@ -185,12 +161,20 @@
"\n",
"\n",
"# Register the train dataset with your workspace\n",
"train_data.register(workspace = ws, name = 'machineData_train_dataset',\n",
" description = 'hardware performance training data',\n",
" create_new_version=True)\n",
"train_data.register(\n",
" workspace=ws,\n",
" name=\"machineData_train_dataset\",\n",
" description=\"hardware performance training data\",\n",
" create_new_version=True,\n",
")\n",
"\n",
"# Register the test dataset with your workspace\n",
"test_data.register(workspace = ws, name = 'machineData_test_dataset', description = 'hardware performance test data', create_new_version=True)\n",
"test_data.register(\n",
" workspace=ws,\n",
" name=\"machineData_test_dataset\",\n",
" description=\"hardware performance test data\",\n",
" create_new_version=True,\n",
")\n",
"\n",
"label = \"ERP\"\n",
"\n",
@@ -249,14 +233,18 @@
"outputs": [],
"source": [
"featurization_config = FeaturizationConfig()\n",
"featurization_config.blocked_transformers = ['LabelEncoder']\n",
"featurization_config.blocked_transformers = [\"LabelEncoder\"]\n",
"# featurization_config.drop_columns = ['MMIN']\n",
"featurization_config.add_column_purpose('MYCT', 'Numeric')\n",
"featurization_config.add_column_purpose('VendorName', 'CategoricalHash')\n",
"featurization_config.add_column_purpose(\"MYCT\", \"Numeric\")\n",
"featurization_config.add_column_purpose(\"VendorName\", \"CategoricalHash\")\n",
"# default strategy mean, add transformer param for for 3 columns\n",
"featurization_config.add_transformer_params('Imputer', ['CACH'], {\"strategy\": \"median\"})\n",
"featurization_config.add_transformer_params('Imputer', ['CHMIN'], {\"strategy\": \"median\"})\n",
"featurization_config.add_transformer_params('Imputer', ['PRP'], {\"strategy\": \"most_frequent\"})\n",
"featurization_config.add_transformer_params(\"Imputer\", [\"CACH\"], {\"strategy\": \"median\"})\n",
"featurization_config.add_transformer_params(\n",
" \"Imputer\", [\"CHMIN\"], {\"strategy\": \"median\"}\n",
")\n",
"featurization_config.add_transformer_params(\n",
" \"Imputer\", [\"PRP\"], {\"strategy\": \"most_frequent\"}\n",
")\n",
"# featurization_config.add_transformer_params('HashOneHotEncoder', [], {\"number_of_bits\": 3})"
]
},
@@ -276,17 +264,18 @@
" \"max_concurrent_iterations\": 4,\n",
" \"max_cores_per_iteration\": -1,\n",
" \"n_cross_validations\": 5,\n",
" \"primary_metric\": 'normalized_root_mean_squared_error',\n",
" \"verbosity\": logging.INFO\n",
" \"primary_metric\": \"normalized_root_mean_squared_error\",\n",
" \"verbosity\": logging.INFO,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'regression',\n",
" debug_log = 'automl_errors.log',\n",
"automl_config = AutoMLConfig(\n",
" task=\"regression\",\n",
" debug_log=\"automl_errors.log\",\n",
" compute_target=compute_target,\n",
" featurization=featurization_config,\n",
" training_data=train_data,\n",
" label_column_name=label,\n",
" **automl_settings\n",
" **automl_settings,\n",
")"
]
},
@@ -359,8 +348,10 @@
"metadata": {},
"outputs": [],
"source": [
"# Download the featuurization summary JSON file locally\n",
"best_run.download_file(\"outputs/featurization_summary.json\", \"featurization_summary.json\")\n",
"# Download the featurization summary JSON file locally\n",
"best_run.download_file(\n",
" \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n",
")\n",
"\n",
"# Render the JSON as a pandas DataFrame\n",
"with open(\"featurization_summary.json\", \"r\") as f:\n",
@@ -394,6 +385,7 @@
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(remote_run).show()"
]
},
@@ -441,7 +433,7 @@
"metadata": {},
"outputs": [],
"source": [
"with open('train_explainer.py', 'r') as cefr:\n",
"with open(\"train_explainer.py\", \"r\") as cefr:\n",
" print(cefr.read())"
]
},
@@ -463,32 +455,36 @@
"import os\n",
"\n",
"# create script folder\n",
"script_folder = './sample_projects/automl-regression-hardware'\n",
"script_folder = \"./sample_projects/automl-regression-hardware\"\n",
"if not os.path.exists(script_folder):\n",
" os.makedirs(script_folder)\n",
"\n",
"# Copy the sample script to script folder.\n",
"shutil.copy('train_explainer.py', script_folder)\n",
"shutil.copy(\"train_explainer.py\", script_folder)\n",
"\n",
"# Create the explainer script that will run on the remote compute.\n",
"script_file_name = script_folder + '/train_explainer.py'\n",
"script_file_name = script_folder + \"/train_explainer.py\"\n",
"\n",
"# Open the sample script for modification\n",
"with open(script_file_name, 'r') as cefr:\n",
"with open(script_file_name, \"r\") as cefr:\n",
" content = cefr.read()\n",
"\n",
"# Replace the values in train_explainer.py file with the appropriate values\n",
"content = content.replace('<<experiment_name>>', automl_run.experiment.name) # your experiment name.\n",
"content = content.replace('<<run_id>>', automl_run.id) # Run-id of the AutoML run for which you want to explain the model.\n",
"content = content.replace('<<target_column_name>>', 'ERP') # Your target column name\n",
"content = content.replace('<<task>>', 'regression') # Training task type\n",
"content = content.replace(\n",
" \"<<experiment_name>>\", automl_run.experiment.name\n",
") # your experiment name.\n",
"content = content.replace(\n",
" \"<<run_id>>\", automl_run.id\n",
") # Run-id of the AutoML run for which you want to explain the model.\n",
"content = content.replace(\"<<target_column_name>>\", \"ERP\") # Your target column name\n",
"content = content.replace(\"<<task>>\", \"regression\") # Training task type\n",
"# Name of your training dataset register with your workspace\n",
"content = content.replace('<<train_dataset_name>>', 'machineData_train_dataset') \n",
"content = content.replace(\"<<train_dataset_name>>\", \"machineData_train_dataset\")\n",
"# Name of your test dataset register with your workspace\n",
"content = content.replace('<<test_dataset_name>>', 'machineData_test_dataset')\n",
"content = content.replace(\"<<test_dataset_name>>\", \"machineData_test_dataset\")\n",
"\n",
"# Write sample file into your script folder.\n",
"with open(script_file_name, 'w') as cefw:\n",
"with open(script_file_name, \"w\") as cefw:\n",
" cefw.write(content)"
]
},
@@ -506,6 +502,8 @@
"outputs": [],
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"import pkg_resources\n",
"\n",
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
@@ -515,7 +513,9 @@
"conda_run_config.environment.docker.enabled = True\n",
"\n",
"# specify CondaDependencies obj\n",
"conda_run_config.environment.python.conda_dependencies = automl_run.get_environment().python.conda_dependencies"
"conda_run_config.environment.python.conda_dependencies = (\n",
" automl_run.get_environment().python.conda_dependencies\n",
")"
]
},
{
@@ -535,9 +535,11 @@
"# Now submit a run on AmlCompute for model explanations\n",
"from azureml.core.script_run_config import ScriptRunConfig\n",
"\n",
"script_run_config = ScriptRunConfig(source_directory=script_folder,\n",
" script='train_explainer.py',\n",
" run_config=conda_run_config)\n",
"script_run_config = ScriptRunConfig(\n",
" source_directory=script_folder,\n",
" script=\"train_explainer.py\",\n",
" run_config=conda_run_config,\n",
")\n",
"\n",
"run = experiment.submit(script_run_config)\n",
"\n",
@@ -579,10 +581,16 @@
"outputs": [],
"source": [
"from azureml.interpret import ExplanationClient\n",
"\n",
"client = ExplanationClient.from_run(automl_run)\n",
"engineered_explanations = client.download_model_explanation(raw=False, comment='engineered explanations')\n",
"engineered_explanations = client.download_model_explanation(\n",
" raw=False, comment=\"engineered explanations\"\n",
")\n",
"print(engineered_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
"print(\n",
" \"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n",
" + automl_run.get_portal_url()\n",
")"
]
},
{
@@ -599,9 +607,14 @@
"metadata": {},
"outputs": [],
"source": [
"raw_explanations = client.download_model_explanation(raw=True, comment='raw explanations')\n",
"raw_explanations = client.download_model_explanation(\n",
" raw=True, comment=\"raw explanations\"\n",
")\n",
"print(raw_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
"print(\n",
" \"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n",
" + automl_run.get_portal_url()\n",
")"
]
},
{
@@ -623,10 +636,12 @@
"outputs": [],
"source": [
"# Register trained automl model present in the 'outputs' folder in the artifacts\n",
"original_model = automl_run.register_model(model_name='automl_model', \n",
" model_path='outputs/model.pkl')\n",
"scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer',\n",
" model_path='outputs/scoring_explainer.pkl')"
"original_model = automl_run.register_model(\n",
" model_name=\"automl_model\", model_path=\"outputs/model.pkl\"\n",
")\n",
"scoring_explainer_model = automl_run.register_model(\n",
" model_name=\"scoring_explainer\", model_path=\"outputs/scoring_explainer.pkl\"\n",
")"
]
},
{
@@ -647,7 +662,6 @@
"\n",
"with open(\"myenv.yml\", \"w\") as f:\n",
" f.write(conda_dep.serialize_to_string())\n",
"\n",
"with open(\"myenv.yml\", \"r\") as f:\n",
" print(f.read())"
]
@@ -683,22 +697,30 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AciWebservice\n",
"from azureml.core.model import Model\n",
"from azureml.core.environment import Environment\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores=2, \n",
"aciconfig = AciWebservice.deploy_configuration(\n",
" cpu_cores=2,\n",
" memory_gb=2,\n",
" tags={\"data\": \"Machine Data\", \n",
" \"method\" : \"local_explanation\"}, \n",
" description='Get local explanations for Machine test data')\n",
" tags={\"data\": \"Machine Data\", \"method\": \"local_explanation\"},\n",
" description=\"Get local explanations for Machine test data\",\n",
")\n",
"\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"inference_config = InferenceConfig(entry_script=\"score_explain.py\", environment=myenv)\n",
"\n",
"# Use configs and models generated above\n",
"service = Model.deploy(ws, 'model-scoring', [scoring_explainer_model, original_model], inference_config, aciconfig)\n",
"service = Model.deploy(\n",
" ws,\n",
" \"model-scoring\",\n",
" [scoring_explainer_model, original_model],\n",
" inference_config,\n",
" aciconfig,\n",
")\n",
"service.wait_for_deployment(show_output=True)"
]
},
@@ -732,19 +754,19 @@
"metadata": {},
"outputs": [],
"source": [
"if service.state == 'Healthy':\n",
"if service.state == \"Healthy\":\n",
" X_test = test_data.drop_columns([label]).to_pandas_dataframe()\n",
" # Serialize the first row of the test data into json\n",
" X_test_json = X_test[:1].to_json(orient='records')\n",
" X_test_json = X_test[:1].to_json(orient=\"records\")\n",
" print(X_test_json)\n",
" # Call the service to get the predictions and the engineered and raw explanations\n",
" output = service.run(X_test_json)\n",
" # Print the predicted value\n",
" print(output['predictions'])\n",
" print(output[\"predictions\"])\n",
" # Print the engineered feature importances for the predicted value\n",
" print(output['engineered_local_importance_values'])\n",
" print(output[\"engineered_local_importance_values\"])\n",
" # Print the raw feature importances for the predicted value\n",
" print(output['raw_local_importance_values'])"
" print(output[\"raw_local_importance_values\"])"
]
},
{
@@ -780,14 +802,14 @@
"# preview the first 3 rows of the dataset\n",
"\n",
"test_data = test_data.to_pandas_dataframe()\n",
"y_test = test_data['ERP'].fillna(0)\n",
"test_data = test_data.drop('ERP', 1)\n",
"y_test = test_data[\"ERP\"].fillna(0)\n",
"test_data = test_data.drop(\"ERP\", 1)\n",
"test_data = test_data.fillna(0)\n",
"\n",
"\n",
"train_data = train_data.to_pandas_dataframe()\n",
"y_train = train_data['ERP'].fillna(0)\n",
"train_data = train_data.drop('ERP', 1)\n",
"y_train = train_data[\"ERP\"].fillna(0)\n",
"train_data = train_data.drop(\"ERP\", 1)\n",
"train_data = train_data.fillna(0)"
]
},
@@ -814,27 +836,41 @@
"from sklearn.metrics import mean_squared_error, r2_score\n",
"\n",
"# Set up a multi-plot chart.\n",
"f, (a0, a1) = plt.subplots(1, 2, gridspec_kw = {'width_ratios':[1, 1], 'wspace':0, 'hspace': 0})\n",
"f.suptitle('Regression Residual Values', fontsize = 18)\n",
"f, (a0, a1) = plt.subplots(\n",
" 1, 2, gridspec_kw={\"width_ratios\": [1, 1], \"wspace\": 0, \"hspace\": 0}\n",
")\n",
"f.suptitle(\"Regression Residual Values\", fontsize=18)\n",
"f.set_figheight(6)\n",
"f.set_figwidth(16)\n",
"\n",
"# Plot residual values of training set.\n",
"a0.axis([0, 360, -100, 100])\n",
"a0.plot(y_residual_train, 'bo', alpha = 0.5)\n",
"a0.plot([-10,360],[0,0], 'r-', lw = 3)\n",
"a0.text(16,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize = 12)\n",
"a0.text(16,140,'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)),fontsize = 12)\n",
"a0.set_xlabel('Training samples', fontsize = 12)\n",
"a0.set_ylabel('Residual Values', fontsize = 12)\n",
"a0.plot(y_residual_train, \"bo\", alpha=0.5)\n",
"a0.plot([-10, 360], [0, 0], \"r-\", lw=3)\n",
"a0.text(\n",
" 16,\n",
" 170,\n",
" \"RMSE = {0:.2f}\".format(np.sqrt(mean_squared_error(y_train, y_pred_train))),\n",
" fontsize=12,\n",
")\n",
"a0.text(\n",
" 16, 140, \"R2 score = {0:.2f}\".format(r2_score(y_train, y_pred_train)), fontsize=12\n",
")\n",
"a0.set_xlabel(\"Training samples\", fontsize=12)\n",
"a0.set_ylabel(\"Residual Values\", fontsize=12)\n",
"\n",
"# Plot residual values of test set.\n",
"a1.axis([0, 90, -100, 100])\n",
"a1.plot(y_residual_test, 'bo', alpha = 0.5)\n",
"a1.plot([-10,360],[0,0], 'r-', lw = 3)\n",
"a1.text(5,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize = 12)\n",
"a1.text(5,140,'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)),fontsize = 12)\n",
"a1.set_xlabel('Test samples', fontsize = 12)\n",
"a1.plot(y_residual_test, \"bo\", alpha=0.5)\n",
"a1.plot([-10, 360], [0, 0], \"r-\", lw=3)\n",
"a1.text(\n",
" 5,\n",
" 170,\n",
" \"RMSE = {0:.2f}\".format(np.sqrt(mean_squared_error(y_test, y_pred_test))),\n",
" fontsize=12,\n",
")\n",
"a1.text(5, 140, \"R2 score = {0:.2f}\".format(r2_score(y_test, y_pred_test)), fontsize=12)\n",
"a1.set_xlabel(\"Test samples\", fontsize=12)\n",
"a1.set_yticklabels([])\n",
"\n",
"plt.show()"
@@ -847,9 +883,11 @@
"outputs": [],
"source": [
"%matplotlib inline\n",
"test_pred = plt.scatter(y_test, y_pred_test, color='')\n",
"test_test = plt.scatter(y_test, y_test, color='g')\n",
"plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n",
"test_pred = plt.scatter(y_test, y_pred_test, color=\"\")\n",
"test_test = plt.scatter(y_test, y_test, color=\"g\")\n",
"plt.legend(\n",
" (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n",
")\n",
"plt.show()"
]
}

View File

@@ -1,7 +1,10 @@
import pandas as pd
import joblib
from azureml.core.model import Model
from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations
from azureml.train.automl.runtime.automl_explain_utilities import (
    automl_setup_model_explanations,
)
import scipy as sp


def init():
@@ -11,26 +14,55 @@ def init():
    # Retrieve the path to the model file using the model name
    # Assume original model is named original_prediction_model
    automl_model_path = Model.get_model_path('automl_model')
    scoring_explainer_path = Model.get_model_path('scoring_explainer')
    automl_model_path = Model.get_model_path("automl_model")
    scoring_explainer_path = Model.get_model_path("scoring_explainer")
    automl_model = joblib.load(automl_model_path)
    scoring_explainer = joblib.load(scoring_explainer_path)


def is_multi_dimensional(matrix):
    if hasattr(matrix, "ndim") and matrix.ndim > 1:
        return True
    if hasattr(matrix, "shape") and matrix.shape[1]:
        return True
    return False


def convert_matrix(matrix):
    if sp.sparse.issparse(matrix):
        matrix = matrix.todense()
    if is_multi_dimensional(matrix):
        matrix = matrix.tolist()
    return matrix


def run(raw_data):
    # Get predictions and explanations for each data point
    data = pd.read_json(raw_data, orient='records')
    data = pd.read_json(raw_data, orient="records")
    # Make prediction
    predictions = automl_model.predict(data)
    # Setup for inferencing explanations
    automl_explainer_setup_obj = automl_setup_model_explanations(automl_model,
                                                                 X_test=data, task='regression')
    automl_explainer_setup_obj = automl_setup_model_explanations(
        automl_model, X_test=data, task="regression"
    )
    # Retrieve model explanations for engineered explanations
    engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform)
    engineered_local_importance_values = scoring_explainer.explain(
        automl_explainer_setup_obj.X_test_transform
    )
    engineered_local_importance_values = convert_matrix(
        engineered_local_importance_values
    )
    # Retrieve model explanations for raw explanations
    raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True)
    raw_local_importance_values = scoring_explainer.explain(
        automl_explainer_setup_obj.X_test_transform, get_raw=True
    )
    raw_local_importance_values = convert_matrix(raw_local_importance_values)
    # You can return any data type as long as it is JSON-serializable
    return {'predictions': predictions.tolist(),
            'engineered_local_importance_values': engineered_local_importance_values,
            'raw_local_importance_values': raw_local_importance_values}
    return {
        "predictions": predictions.tolist(),
        "engineered_local_importance_values": engineered_local_importance_values,
        "raw_local_importance_values": raw_local_importance_values,
    }
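A hedged smoke test of the scoring contract above (usable only inside an environment where the registered models resolve; the feature columns shown are illustrative, not the dataset's full schema):

```python
import json

# Exercise init()/run() the way the web service host would.
init()  # loads the registered 'automl_model' and 'scoring_explainer'
# Columns below are illustrative only, not the complete training schema.
sample = json.dumps([{"MYCT": 125, "CACH": 16, "CHMIN": 4, "PRP": 198, "VendorName": "ibm"}])
result = run(sample)
print(result["predictions"])
```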

View File

@@ -10,11 +10,13 @@ from azureml.core.dataset import Dataset
from azureml.core.run import Run
from azureml.interpret.mimic_wrapper import MimicWrapper
from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer
from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations, \
    automl_check_model_if_explainable
from azureml.train.automl.runtime.automl_explain_utilities import (
    automl_setup_model_explanations,
    automl_check_model_if_explainable,
)

OUTPUT_DIR = './outputs/'
OUTPUT_DIR = "./outputs/"
os.makedirs(OUTPUT_DIR, exist_ok=True)

# Get workspace from the run context
@@ -22,63 +24,77 @@ run = Run.get_context()
ws = run.experiment.workspace

# Get the AutoML run object from the experiment name and the workspace
experiment = Experiment(ws, '<<experiment_name>>')
automl_run = Run(experiment=experiment, run_id='<<run_id>>')
experiment = Experiment(ws, "<<experiment_name>>")
automl_run = Run(experiment=experiment, run_id="<<run_id>>")

# Check if this AutoML model is explainable
if not automl_check_model_if_explainable(automl_run):
    raise Exception("Model explanations are currently not supported for " + automl_run.get_properties().get(
        'run_algorithm'))
    raise Exception(
        "Model explanations are currently not supported for "
        + automl_run.get_properties().get("run_algorithm")
    )

# Download the best model from the artifact store
automl_run.download_file(name=MODEL_PATH, output_file_path='model.pkl')
automl_run.download_file(name=MODEL_PATH, output_file_path="model.pkl")

# Load the AutoML model into memory
fitted_model = joblib.load('model.pkl')
fitted_model = joblib.load("model.pkl")

# Get the train dataset from the workspace
train_dataset = Dataset.get_by_name(workspace=ws, name='<<train_dataset_name>>')
train_dataset = Dataset.get_by_name(workspace=ws, name="<<train_dataset_name>>")
# Drop the labeled column to get the training set.
X_train = train_dataset.drop_columns(columns=['<<target_column_name>>'])
y_train = train_dataset.keep_columns(columns=['<<target_column_name>>'], validate=True)
X_train = train_dataset.drop_columns(columns=["<<target_column_name>>"])
y_train = train_dataset.keep_columns(columns=["<<target_column_name>>"], validate=True)

# Get the test dataset from the workspace
test_dataset = Dataset.get_by_name(workspace=ws, name='<<test_dataset_name>>')
test_dataset = Dataset.get_by_name(workspace=ws, name="<<test_dataset_name>>")
# Drop the labeled column to get the testing set.
X_test = test_dataset.drop_columns(columns=['<<target_column_name>>'])
X_test = test_dataset.drop_columns(columns=["<<target_column_name>>"])

# Setup the class for explaining the AutoML models
automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, '<<task>>',
                                                             X=X_train, X_test=X_test,
                                                             y=y_train,
                                                             automl_run=automl_run)
automl_explainer_setup_obj = automl_setup_model_explanations(
    fitted_model, "<<task>>", X=X_train, X_test=X_test, y=y_train, automl_run=automl_run
)

# Initialize the Mimic Explainer
explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel,
explainer = MimicWrapper(
    ws,
    automl_explainer_setup_obj.automl_estimator,
    LGBMExplainableModel,
    init_dataset=automl_explainer_setup_obj.X_transform,
    run=automl_explainer_setup_obj.automl_run,
    features=automl_explainer_setup_obj.engineered_feature_names,
    feature_maps=[automl_explainer_setup_obj.feature_map],
    classes=automl_explainer_setup_obj.classes)
    classes=automl_explainer_setup_obj.classes,
)

# Compute the engineered explanations
engineered_explanations = explainer.explain(['local', 'global'], tag='engineered explanations',
                                            eval_dataset=automl_explainer_setup_obj.X_test_transform)
engineered_explanations = explainer.explain(
    ["local", "global"],
    tag="engineered explanations",
    eval_dataset=automl_explainer_setup_obj.X_test_transform,
)
# Compute the raw explanations
raw_explanations = explainer.explain(['local', 'global'], get_raw=True, tag='raw explanations',
raw_explanations = explainer.explain(
    ["local", "global"],
    get_raw=True,
    tag="raw explanations",
    raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
    eval_dataset=automl_explainer_setup_obj.X_test_transform,
    raw_eval_dataset=automl_explainer_setup_obj.X_test_raw)
    raw_eval_dataset=automl_explainer_setup_obj.X_test_raw,
)
print("Engineered and raw explanations computed successfully")

# Initialize the ScoringExplainer
scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map])
scoring_explainer = TreeScoringExplainer(
    explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]
)
# Pickle scoring explainer locally
with open('scoring_explainer.pkl', 'wb') as stream:
with open("scoring_explainer.pkl", "wb") as stream:
    joblib.dump(scoring_explainer, stream)
# Upload the scoring explainer to the automl run
automl_run.upload_file('outputs/scoring_explainer.pkl', 'scoring_explainer.pkl')
automl_run.upload_file("outputs/scoring_explainer.pkl", "scoring_explainer.pkl")

View File

@@ -1,21 +1,5 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -86,16 +70,6 @@
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -105,18 +79,19 @@
"ws = Workspace.from_config()\n",
"\n",
"# Choose a name for the experiment.\n",
"experiment_name = 'automl-regression'\n",
"experiment_name = \"automl-regression\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Run History Name\"] = experiment_name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
@@ -143,10 +118,11 @@
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
" print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n",
" max_nodes=4)\n",
" compute_config = AmlCompute.provisioning_configuration(\n",
" vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n",
" )\n",
" compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
@@ -179,7 +155,7 @@
"# Split the dataset into train and test datasets\n",
"train_data, test_data = dataset.random_split(percentage=0.8, seed=223)\n",
"\n",
"label = \"ERP\"\n"
"label = \"ERP\""
]
},
{
@@ -213,7 +189,7 @@
"source": [
"automl_settings = {\n",
" \"n_cross_validations\": 3,\n",
" \"primary_metric\": 'normalized_root_mean_squared_error',\n",
" \"primary_metric\": \"r2_score\",\n",
" \"enable_early_stopping\": True,\n",
" \"experiment_timeout_hours\": 0.3, # for real scenarios we reccommend a timeout of at least one hour\n",
" \"max_concurrent_iterations\": 4,\n",
@@ -221,11 +197,12 @@
" \"verbosity\": logging.INFO,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'regression',\n",
"automl_config = AutoMLConfig(\n",
" task=\"regression\",\n",
" compute_target=compute_target,\n",
" training_data=train_data,\n",
" label_column_name=label,\n",
" **automl_settings\n",
" **automl_settings,\n",
")"
]
},
@@ -281,6 +258,7 @@
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(remote_run).show()"
]
},
@@ -366,12 +344,12 @@
"metadata": {},
"outputs": [],
"source": [
"y_test = test_data.keep_columns('ERP').to_pandas_dataframe()\n",
"test_data = test_data.drop_columns('ERP').to_pandas_dataframe()\n",
"y_test = test_data.keep_columns(\"ERP\").to_pandas_dataframe()\n",
"test_data = test_data.drop_columns(\"ERP\").to_pandas_dataframe()\n",
"\n",
"\n",
"y_train = train_data.keep_columns('ERP').to_pandas_dataframe()\n",
"train_data = train_data.drop_columns('ERP').to_pandas_dataframe()\n"
"y_train = train_data.keep_columns(\"ERP\").to_pandas_dataframe()\n",
"train_data = train_data.drop_columns(\"ERP\").to_pandas_dataframe()"
]
},
{
@@ -397,27 +375,41 @@
"from sklearn.metrics import mean_squared_error, r2_score\n",
"\n",
"# Set up a multi-plot chart.\n",
"f, (a0, a1) = plt.subplots(1, 2, gridspec_kw = {'width_ratios':[1, 1], 'wspace':0, 'hspace': 0})\n",
"f.suptitle('Regression Residual Values', fontsize = 18)\n",
"f, (a0, a1) = plt.subplots(\n",
" 1, 2, gridspec_kw={\"width_ratios\": [1, 1], \"wspace\": 0, \"hspace\": 0}\n",
")\n",
"f.suptitle(\"Regression Residual Values\", fontsize=18)\n",
"f.set_figheight(6)\n",
"f.set_figwidth(16)\n",
"\n",
"# Plot residual values of training set.\n",
"a0.axis([0, 360, -100, 100])\n",
"a0.plot(y_residual_train, 'bo', alpha = 0.5)\n",
"a0.plot([-10,360],[0,0], 'r-', lw = 3)\n",
"a0.text(16,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize = 12)\n",
"a0.text(16,140,'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)),fontsize = 12)\n",
"a0.set_xlabel('Training samples', fontsize = 12)\n",
"a0.set_ylabel('Residual Values', fontsize = 12)\n",
"a0.plot(y_residual_train, \"bo\", alpha=0.5)\n",
"a0.plot([-10, 360], [0, 0], \"r-\", lw=3)\n",
"a0.text(\n",
" 16,\n",
" 170,\n",
" \"RMSE = {0:.2f}\".format(np.sqrt(mean_squared_error(y_train, y_pred_train))),\n",
" fontsize=12,\n",
")\n",
"a0.text(\n",
" 16, 140, \"R2 score = {0:.2f}\".format(r2_score(y_train, y_pred_train)), fontsize=12\n",
")\n",
"a0.set_xlabel(\"Training samples\", fontsize=12)\n",
"a0.set_ylabel(\"Residual Values\", fontsize=12)\n",
"\n",
"# Plot residual values of test set.\n",
"a1.axis([0, 90, -100, 100])\n",
"a1.plot(y_residual_test, 'bo', alpha = 0.5)\n",
"a1.plot([-10,360],[0,0], 'r-', lw = 3)\n",
"a1.text(5,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize = 12)\n",
"a1.text(5,140,'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)),fontsize = 12)\n",
"a1.set_xlabel('Test samples', fontsize = 12)\n",
"a1.plot(y_residual_test, \"bo\", alpha=0.5)\n",
"a1.plot([-10, 360], [0, 0], \"r-\", lw=3)\n",
"a1.text(\n",
" 5,\n",
" 170,\n",
" \"RMSE = {0:.2f}\".format(np.sqrt(mean_squared_error(y_test, y_pred_test))),\n",
" fontsize=12,\n",
")\n",
"a1.text(5, 140, \"R2 score = {0:.2f}\".format(r2_score(y_test, y_pred_test)), fontsize=12)\n",
"a1.set_xlabel(\"Test samples\", fontsize=12)\n",
"a1.set_yticklabels([])\n",
"\n",
"plt.show()"
@@ -430,9 +422,11 @@
"outputs": [],
"source": [
"%matplotlib inline\n",
"test_pred = plt.scatter(y_test, y_pred_test, color='')\n",
"test_test = plt.scatter(y_test, y_test, color='g')\n",
"plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n",
"test_pred = plt.scatter(y_test, y_pred_test, color=\"\")\n",
"test_test = plt.scatter(y_test, y_test, color=\"g\")\n",
"plt.legend(\n",
" (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n",
")\n",
"plt.show()"
]
},

View File

@@ -82,7 +82,7 @@
"source": [
"## Create trained model\n",
"\n",
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset). "
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html). "
]
},
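The markdown cell above only names the dataset; as a rough illustration (a sketch using scikit-learn alone, not the notebook's actual training cell), a small model on the diabetes dataset can be fit like this:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

# Fit a small regression model on the diabetes dataset.
X, y = load_diabetes(return_X_y=True)
model = Ridge().fit(X, y)
print("Training R^2:", model.score(X, y))
```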
{
@@ -279,7 +279,9 @@
"\n",
"\n",
"environment = Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
"environment.python.conda_dependencies = CondaDependencies.create(conda_packages=[\n",
" 'pip==20.2.4'],\n",
" pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'joblib',\n",
@@ -478,7 +480,9 @@
"\n",
"\n",
"environment = Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
"environment.python.conda_dependencies = CondaDependencies.create(conda_packages=[\n",
" 'pip==20.2.4'],\n",
" pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'joblib',\n",

View File

@@ -81,7 +81,7 @@
"source": [
"## Create trained model\n",
"\n",
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset). "
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset). "
]
},
{
@@ -263,7 +263,7 @@
"\n",
"# explicitly set base_image to None when setting base_dockerfile\n",
"myenv.docker.base_image = None\n",
"myenv.docker.base_dockerfile = \"FROM mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04\\nRUN echo \\\"this is test\\\"\"\n",
"myenv.docker.base_dockerfile = \"FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04\\nRUN echo \\\"this is test\\\"\"\n",
"myenv.inferencing_stack_version = \"latest\"\n",
"\n",
"inference_config = InferenceConfig(source_directory=source_directory,\n",

View File

@@ -105,7 +105,9 @@
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"environment=Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
"environment.python.conda_dependencies = CondaDependencies.create(conda_packages=[\n",
" 'pip==20.2.4'],\n",
" pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'numpy',\n",

View File

@@ -70,7 +70,7 @@
"\n",
"import urllib.request\n",
"\n",
"onnx_model_url = \"https://github.com/onnx/models/blob/master/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.tar.gz?raw=true\"\n",
"onnx_model_url = \"https://github.com/onnx/models/blob/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.tar.gz?raw=true\"\n",
"\n",
"urllib.request.urlretrieve(onnx_model_url, filename=\"emotion-ferplus-7.tar.gz\")\n",
"\n",

View File

@@ -70,7 +70,7 @@
"\n",
"import urllib.request\n",
"\n",
"onnx_model_url = \"https://github.com/onnx/models/blob/master/vision/classification/mnist/model/mnist-7.tar.gz?raw=true\"\n",
"onnx_model_url = \"https://github.com/onnx/models/blob/main/vision/classification/mnist/model/mnist-7.tar.gz?raw=true\"\n",
"\n",
"urllib.request.urlretrieve(onnx_model_url, filename=\"mnist-7.tar.gz\")"
]

View File

@@ -106,7 +106,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.41.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -249,6 +249,7 @@
"source": [
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"import sys\n",
"\n",
"# Create a new RunConfig object\n",
"run_config = RunConfiguration(framework=\"python\")\n",
@@ -260,6 +261,8 @@
" 'azureml-defaults', 'azureml-telemetry', 'azureml-interpret'\n",
"]\n",
"\n",
"python_version = '{0}.{1}'.format(sys.version_info[0], sys.version_info[1])\n",
"\n",
"# Note: this is to pin the scikit-learn and pandas versions to be same as notebook.\n",
"# In production scenario user would choose their dependencies\n",
"import pkg_resources\n",
@@ -283,7 +286,7 @@
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
"# cause errors. Please take extra care when specifying your dependencies in a production environment.\n",
"azureml_pip_packages.extend([sklearn_dep, pandas_dep])\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(pip_packages=azureml_pip_packages)\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(pip_packages=azureml_pip_packages, python_version=python_version)\n",
"\n",
"from azureml.core import ScriptRunConfig\n",
"\n",

View File

@@ -11,4 +11,6 @@ dependencies:
- matplotlib
- azureml-dataset-runtime
- ipywidgets
- raiwidgets~=0.15.0
- raiwidgets~=0.17.0
- itsdangerous==2.0.1
- markupsafe<2.1.0

View File

@@ -10,4 +10,7 @@ dependencies:
- ipython
- matplotlib
- ipywidgets
- raiwidgets~=0.15.0
- raiwidgets~=0.17.0
- packaging>=20.9
- itsdangerous==2.0.1
- markupsafe<2.1.0

View File

@@ -358,6 +358,7 @@
"# cause errors. Please take extra care when specifying your dependencies in a production environment.\n",
"myenv = CondaDependencies.create(\n",
" python_version=python_version,\n",
" conda_packages=['pip==20.2.4'],\n",
" pip_packages=['pyyaml', sklearn_dep, pandas_dep] + azureml_pip_packages)\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
@@ -391,7 +392,7 @@
"\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, \n",
" memory_gb=1, \n",
" memory_gb=2, \n",
" tags={\"data\": \"IBM_Attrition\", \n",
" \"method\" : \"local_explanation\"}, \n",
" description='Get local explanations for IBM Employee Attrition data')\n",
@@ -415,8 +416,8 @@
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"from raiutils.webservice import post_with_retries\n",
"\n",
"\n",
"# Create data to test service with\n",
@@ -428,7 +429,7 @@
"\n",
"# Send request to service\n",
"print(\"POST to url\", service.scoring_uri)\n",
"resp = requests.post(service.scoring_uri, sample_data, headers=headers)\n",
"resp = post_with_retries(service.scoring_uri, sample_data, headers)\n",
"\n",
"# Can covert back to Python objects from json string if desired\n",
"print(\"prediction:\", resp.text)\n",

View File

@@ -10,4 +10,8 @@ dependencies:
- ipython
- matplotlib
- ipywidgets
- raiwidgets~=0.15.0
- raiwidgets~=0.17.0
- packaging>=20.9
- itsdangerous==2.0.1
- markupsafe<2.1.0
- raiutils

View File

@@ -513,7 +513,7 @@
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"from raiutils.webservice import post_with_retries\n",
"\n",
"# Create data to test service with\n",
"examples = x_test[:4]\n",
@@ -523,7 +523,7 @@
"\n",
"# Send request to service\n",
"print(\"POST to url\", service.scoring_uri)\n",
"resp = requests.post(service.scoring_uri, input_data, headers=headers)\n",
"resp = post_with_retries(service.scoring_uri, input_data, headers)\n",
"\n",
"# Can covert back to Python objects from json string if desired\n",
"print(\"prediction:\", resp.text)"

View File

@@ -12,4 +12,7 @@ dependencies:
- azureml-dataset-runtime
- azureml-core
- ipywidgets
- raiwidgets~=0.15.0
- raiwidgets~=0.17.0
- itsdangerous==2.0.1
- markupsafe<2.1.0
- raiutils

View File

@@ -5,17 +5,6 @@ import argparse
import os
from azureml.core import Run
def get_dict(dict_str):
    pairs = dict_str.strip("{}").split(r'\;')
    new_dict = {}
    for pair in pairs:
        key, value = pair.strip().split(":")
        new_dict[key.strip().strip("'")] = value.strip().strip("'")
    return new_dict
print("Cleans the input data")
# Get the input green_taxi_data. To learn more about how to access dataset in your script, please
@@ -23,7 +12,6 @@ print("Cleans the input data")
run = Run.get_context()
raw_data = run.input_datasets["raw_data"]
parser = argparse.ArgumentParser("cleanse")
parser.add_argument("--output_cleanse", type=str, help="cleaned taxi data directory")
parser.add_argument("--useful_columns", type=str, help="useful columns to keep")
@@ -38,8 +26,8 @@ print("Argument 3(output cleansed taxi data path): %s" % args.output_cleanse)
# These functions ensure that null data is removed from the dataset,
# which will help increase machine learning model accuracy.
useful_columns = [s.strip().strip("'") for s in args.useful_columns.strip("[]").split(r'\;')]
columns = get_dict(args.columns)
useful_columns = eval(args.useful_columns.replace(';', ','))
columns = eval(args.columns.replace(';', ','))
new_df = (raw_data.to_pandas_dataframe()
          .dropna(how='all')

View File

@@ -254,6 +254,7 @@
"- conda-forge\n",
"dependencies:\n",
"- python=3.6.2\n",
"- pip=21.3.1\n",
"- pip:\n",
" - azureml-defaults\n",
" - azureml-opendatasets\n",
@@ -587,7 +588,7 @@
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,\n",
" auth_enabled=True, # this flag generates API keys to secure access\n",
" memory_gb=1,\n",
" memory_gb=2,\n",
" tags={'name': 'mnist', 'framework': 'Chainer'},\n",
" description='Chainer DNN with MNIST')\n",
"\n",

View File

@@ -163,7 +163,7 @@
"metadata": {},
"outputs": [],
"source": [
"fastai_env.docker.base_image = \"fastdotai/fastai:latest\"\n",
"fastai_env.docker.base_image = \"fastdotai/fastai:2021-02-11\"\n",
"fastai_env.python.user_managed_dependencies = True"
]
},
@@ -199,7 +199,7 @@
"Specify docker steps as a string:\n",
"```python \n",
"dockerfile = r\"\"\" \\\n",
"FROM mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04\n",
"FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04\n",
"RUN echo \"Hello from custom container!\" \\\n",
"\"\"\"\n",
"```\n",

View File

@@ -431,6 +431,7 @@
"- conda-forge\n",
"dependencies:\n",
"- python=3.6.2\n",
"- pip=21.3.1\n",
"- pip:\n",
" - h5py<=2.10.0\n",
" - azureml-defaults\n",

View File

@@ -262,6 +262,7 @@
"- conda-forge\n",
"dependencies:\n",
"- python=3.6.2\n",
"- pip=21.3.1\n",
"- pip:\n",
" - azureml-defaults\n",
" - torch==1.6.0\n",

View File

@@ -6,5 +6,5 @@ dependencies:
- pillow==5.4.1
- matplotlib
- numpy==1.19.3
- https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp36-cp36m-win_amd64.whl
- https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp36-cp36m-win_amd64.whl
- https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp38-cp38-win_amd64.whl
- https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp38-cp38-win_amd64.whl

View File

@@ -103,15 +103,14 @@ device = torch.device("cuda" if use_cuda else "cpu")
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
# Use Azure Open Datasets for MNIST dataset
datasets.MNIST.mirrors = [
    "https://azureopendatastorage.azurefd.net/mnist/"
]
datasets.MNIST.resources = [
    ("https://azureopendatastorage.azurefd.net/mnist/train-images-idx3-ubyte.gz",
     "f68b3c2dcbeaaa9fbdd348bbdeb94873"),
    ("https://azureopendatastorage.azurefd.net/mnist/train-labels-idx1-ubyte.gz",
     "d53e105ee54ea40749a09fcbcd1e9432"),
    ("https://azureopendatastorage.azurefd.net/mnist/t10k-images-idx3-ubyte.gz",
     "9fb629c4189551a2d022fa330f9573f3"),
    ("https://azureopendatastorage.azurefd.net/mnist/t10k-labels-idx1-ubyte.gz",
     "ec29112dd5afa0611ce80d1b7f02629c")
    ("train-images-idx3-ubyte.gz", "f68b3c2dcbeaaa9fbdd348bbdeb94873"),
    ("train-labels-idx1-ubyte.gz", "d53e105ee54ea40749a09fcbcd1e9432"),
    ("t10k-images-idx3-ubyte.gz", "9fb629c4189551a2d022fa330f9573f3"),
    ("t10k-labels-idx1-ubyte.gz", "ec29112dd5afa0611ce80d1b7f02629c")
]
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
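A plausible reading of this change (an inference; the diff itself does not explain it): torchvision builds each download URL by concatenating a mirror with a resource filename, so once `mirrors` points at the Azure Open Datasets endpoint, the `resources` entries should be bare filenames rather than full URLs. A tiny sketch of the resulting URL:

```python
# Illustration only: torchvision joins mirror + filename when downloading.
mirror = "https://azureopendatastorage.azurefd.net/mnist/"
filename = "train-images-idx3-ubyte.gz"
print(mirror + filename)  # the URL torchvision would actually fetch
```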

View File

@@ -1,4 +1,7 @@
# Important Note
Azure Machine Learning reinforcement learning via the `azureml.contrib.train.rl` package used on this page will no longer be supported after June 2022. We recommend that customers use the Ray-on-AML library to run reinforcement learning experiments on Azure Machine Learning. The sample notebooks referenced in [this section](#contents) have been updated accordingly to use the Ray-on-AML library.
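For orientation, a minimal sketch of the Ray-on-AML pattern the updated notebooks and training scripts use (package and version as pinned elsewhere in this changeset; treat the exact API surface as an assumption):

```python
# Head/worker pattern with ray-on-aml (pinned as ray-on-aml==0.1.6 in this changeset).
from ray_on_aml.core import Ray_On_AML

if __name__ == "__main__":
    ray_on_aml = Ray_On_AML()
    ray = ray_on_aml.getRay()
    if ray:  # head node: drive the experiment from here
        print(ray.cluster_resources())
    else:  # worker nodes simply join the Ray cluster
        print("in worker node")
```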
# Azure Machine Learning - Reinforcement Learning (Public Preview)
<!--

View File

@@ -0,0 +1,16 @@
FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04
RUN pip install ray-on-aml==0.1.6
RUN pip install gym[atari]==0.19.0
RUN pip install gym[accept-rom-license]==0.19.0
RUN pip install ale-py==0.7.0
RUN pip install azureml-core
RUN pip install ray==0.8.7
RUN pip install ray[rllib,tune,serve]==0.8.7
RUN pip install tensorflow==1.14.0
USER root
RUN apt-get update
RUN apt-get install -y jq
RUN apt-get install -y rsync

View File

@@ -0,0 +1,72 @@
FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20211111.v1
# CUDA repository key rotation: https://forums.developer.nvidia.com/t/notice-cuda-linux-repository-key-rotation/212771
RUN apt-key del 7fa2af80
ENV distro ubuntu1804
ENV arch x86_64
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/3bf863cc.pub
RUN apt-get update && apt-get install -y --no-install-recommends \
    python-opengl \
    rsync \
    xvfb && \
    apt-get clean -y && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /usr/share/man/*
ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/tensorflow-2.4
# Create conda environment
RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
    python=3.7 pip=20.2.4
# Prepend path to AzureML conda environment
ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH
RUN pip --version
RUN python --version
# Install ray-on-aml
RUN pip install 'ray-on-aml==0.1.6'
RUN pip install ray==0.8.7
RUN pip install gym[atari]==0.19.0
RUN pip install gym[accept-rom-license]==0.19.0
# Install pip dependencies
RUN HOROVOD_WITH_TENSORFLOW=1 \
    pip install 'matplotlib>=3.3,<3.4' \
        'psutil>=5.8,<5.9' \
        'tqdm>=4.59,<4.60' \
        'pandas>=1.1,<1.2' \
        'scipy>=1.5,<1.6' \
        'numpy>=1.10,<1.20' \
        'ipykernel~=6.0' \
        'azureml-core==1.36.0.post2' \
        'azureml-defaults==1.36.0' \
        'azureml-mlflow==1.36.0' \
        'azureml-telemetry==1.36.0' \
        'tensorboard==2.4.0' \
        'tensorflow-gpu==2.4.1' \
        'tensorflow-datasets==4.3.0' \
        'onnxruntime-gpu>=1.7,<1.8' \
        'horovod[tensorflow-gpu]==0.21.3'
RUN pip install --no-cache-dir \
    azureml-defaults \
    azureml-dataset-runtime[fuse,pandas] \
    azureml-contrib-reinforcementlearning \
    gputil \
    cloudpickle==1.3.0 \
    tabulate \
    dm_tree \
    lz4 \
    psutil \
    setproctitle
# This is required for ray 0.8.7
RUN pip install -U aiohttp==3.7.4
# This is needed for mpi to locate libpython
ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH

View File

@@ -1,28 +1,21 @@
import ray
from ray_on_aml.core import Ray_On_AML
import ray.tune as tune
from ray.rllib import train
import os
import sys
from azureml.core import Run
from utils import callbacks

DEFAULT_RAY_ADDRESS = 'localhost:6379'

if __name__ == "__main__":
    ray_on_aml = Ray_On_AML()
    ray = ray_on_aml.getRay()
    if ray:  # in the headnode
        # Parse arguments
        train_parser = train.create_parser()
        args = train_parser.parse_args()
        print("Algorithm config:", args.config)
        if args.ray_address is None:
            args.ray_address = DEFAULT_RAY_ADDRESS
        ray.init(address=args.ray_address)
        tune.run(
            run_or_experiment=args.run,
            config={
@@ -38,3 +31,5 @@ if __name__ == "__main__":
            },
            stop=args.stop,
            local_dir='./logs')
    else:
        print("in worker node")

View File

@@ -8,7 +8,7 @@ from azureml.core import Run
def on_train_result(info):
    '''Callback on train result to record metrics returned by trainer.
    '''
    run = Run.get_context().parent
    run = Run.get_context()
    run.log(
        name='episode_reward_mean',
        value=info["result"]["episode_reward_mean"])

View File

@@ -84,7 +84,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646081765827
}
},
"outputs": [],
"source": [
"%matplotlib inline\n",
@@ -93,7 +97,7 @@
"import azureml.core\n",
"\n",
"# Check core SDK version number\n",
"print(\"Azure Machine Learning SDK Version: \", azureml.core.VERSION)"
"print(\"Azure Machine Learning SDK version: \", azureml.core.VERSION)"
]
},
{
@@ -107,7 +111,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646081772340
}
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
@@ -127,7 +135,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646081775643
}
},
"outputs": [],
"source": [
"from azureml.core.experiment import Experiment\n",
@@ -137,180 +149,13 @@
"exp = Experiment(workspace=ws, name=experiment_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Virtual Network and Network Security Group\n",
"\n",
"**If you are using separate compute targets for the Ray head and worker, as we do in this notebook**, a virtual network must be created in the resource group. If you have already created a virtual network in the resource group, you can skip this step.\n",
"\n",
"> Note that your user role must have permissions to create and manage virtual networks to run the cells below. Talk to your IT admin if you do not have these permissions.\n",
"\n",
"#### Create Virtual Network\n",
"To create the virtual network you first must install the [Azure Networking Python API](https://docs.microsoft.com/python/api/overview/azure/network?view=azure-python).\n",
"\n",
"`pip install --upgrade azure-mgmt-network`\n",
"\n",
"Note: In this section we are using [DefaultAzureCredential](https://docs.microsoft.com/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python)\n",
"class for authentication which, by default, examines several options in turn, and stops on the first option that provides\n",
"a token. You will need to log in using Azure CLI, if none of the other options are available (please find more details [here](https://docs.microsoft.com/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python))."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you need to install the Azure Networking SDK, uncomment the following line.\n",
"#!pip install --upgrade azure-mgmt-network"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azure.mgmt.network import NetworkManagementClient\n",
"from azure.identity import DefaultAzureCredential\n",
"\n",
"# Virtual network name\n",
"vnet_name =\"rl_pong_vnet\"\n",
"\n",
"# Default subnet\n",
"subnet_name =\"default\"\n",
"\n",
"# The Azure subscription you are using\n",
"subscription_id=ws.subscription_id\n",
"\n",
"# The resource group for the reinforcement learning cluster\n",
"resource_group=ws.resource_group\n",
"\n",
"# Azure region of the resource group\n",
"location=ws.location\n",
"\n",
"network_client = NetworkManagementClient(credential=DefaultAzureCredential(), subscription_id=subscription_id)\n",
"\n",
"async_vnet_creation = network_client.virtual_networks.begin_create_or_update(\n",
" resource_group,\n",
" vnet_name,\n",
" {\n",
" 'location': location,\n",
" 'address_space': {\n",
" 'address_prefixes': ['10.0.0.0/16']\n",
" }\n",
" }\n",
")\n",
"\n",
"async_vnet_creation.wait()\n",
"print(\"Virtual network created successfully: \", async_vnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Set up Network Security Group on Virtual Network\n",
"\n",
"Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/azure/machine-learning/how-to-enable-virtual-network).\n",
"\n",
"A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).\n",
"\n",
"You may need to modify the code below to match your scenario."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azure.mgmt.network.models\n",
"\n",
"security_group_name = vnet_name + '-' + \"nsg\"\n",
"security_rule_name = \"AllowAML\"\n",
"\n",
"# Create a network security group\n",
"nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(\n",
" location=location,\n",
" security_rules=[\n",
" azure.mgmt.network.models.SecurityRule(\n",
" name=security_rule_name,\n",
" access=azure.mgmt.network.models.SecurityRuleAccess.allow,\n",
" description='Reinforcement Learning in Azure Machine Learning rule',\n",
" destination_address_prefix='*',\n",
" destination_port_range='29876-29877',\n",
" direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,\n",
" priority=400,\n",
" protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,\n",
" source_address_prefix='BatchNodeManagement',\n",
" source_port_range='*'\n",
" ),\n",
" ],\n",
")\n",
"\n",
"async_nsg_creation = network_client.network_security_groups.begin_create_or_update(\n",
" resource_group,\n",
" security_group_name,\n",
" nsg_params,\n",
")\n",
"\n",
"async_nsg_creation.wait() \n",
"print(\"Network security group created successfully:\", async_nsg_creation.result())\n",
"\n",
"network_security_group = network_client.network_security_groups.get(\n",
" resource_group,\n",
" security_group_name,\n",
")\n",
"\n",
"# Define a subnet to be created with network security group\n",
"subnet = azure.mgmt.network.models.Subnet(\n",
" id='default',\n",
" address_prefix='10.0.0.0/24',\n",
" network_security_group=network_security_group\n",
" )\n",
" \n",
"# Create subnet on virtual network\n",
"async_subnet_creation = network_client.subnets.begin_create_or_update(\n",
" resource_group_name=resource_group,\n",
" virtual_network_name=vnet_name,\n",
" subnet_name=subnet_name,\n",
" subnet_parameters=subnet\n",
")\n",
"\n",
"async_subnet_creation.wait()\n",
"print(\"Subnet created successfully:\", async_subnet_creation.result())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Review the virtual network security rules\n",
"Ensure that the virtual network is configured correctly with required ports open. It is possible that you have configured rules with broader range of ports that allows ports 29876-29877 to be opened. Kindly review your network security group rules. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from files.networkutils import *\n",
"from azure.identity import DefaultAzureCredential\n",
"\n",
"check_vnet_security_rules(DefaultAzureCredential(), ws.subscription_id, ws.resource_group, vnet_name, True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create compute targets\n",
"\n",
"In this example, we show how to set up separate compute targets for the Ray head and Ray worker nodes.\n",
"In this example, we show how to set up separate compute targets for the Ray nodes.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
@@ -322,149 +167,126 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646086081229
}
},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
"\n",
"# Choose a name for the Ray head cluster\n",
"head_compute_name = 'head-gpu'\n",
"head_compute_min_nodes = 0\n",
"head_compute_max_nodes = 2\n",
"# Choose a name for the Ray cluster\n",
"compute_name = 'compute-gpu'\n",
"compute_min_nodes = 0\n",
"compute_max_nodes = 2\n",
"\n",
"# This example uses GPU VM. For using CPU VM, set SKU to STANDARD_D2_V2\n",
"head_vm_size = 'STANDARD_NC6'\n",
"vm_size = 'STANDARD_NC6'\n",
"\n",
"if head_compute_name in ws.compute_targets:\n",
" head_compute_target = ws.compute_targets[head_compute_name]\n",
" if head_compute_target and type(head_compute_target) is AmlCompute:\n",
" if head_compute_target.provisioning_state == 'Succeeded':\n",
" print('found head compute target. just use it', head_compute_name)\n",
"if compute_name in ws.compute_targets:\n",
" compute_target = ws.compute_targets[compute_name]\n",
" if compute_target and type(compute_target) is AmlCompute:\n",
" if compute_target.provisioning_state == 'Succeeded':\n",
" print('found compute target. just use it', compute_name)\n",
" else: \n",
" raise Exception(\n",
" 'found head compute target but it is in state', head_compute_target.provisioning_state)\n",
" 'found compute target but it is in state', compute_target.provisioning_state)\n",
"else:\n",
" print('creating a new head compute target...')\n",
" print('creating a new compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(\n",
" vm_size=head_vm_size,\n",
" min_nodes=head_compute_min_nodes, \n",
" max_nodes=head_compute_max_nodes,\n",
" vnet_resourcegroup_name=ws.resource_group,\n",
" vnet_name=vnet_name,\n",
" subnet_name='default')\n",
" vm_size=vm_size,\n",
" min_nodes=compute_min_nodes, \n",
" max_nodes=compute_max_nodes,\n",
" )\n",
"\n",
" # Create the cluster\n",
" head_compute_target = ComputeTarget.create(ws, head_compute_name, provisioning_config)\n",
" compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
" \n",
" # Can poll for a minimum number of nodes and for a specific timeout. \n",
" # If no min node count is provided it will use the scale settings for the cluster\n",
" head_compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
" # For a more detailed view of current AmlCompute status, use get_status()\n",
" print(head_compute_target.get_status().serialize())"
]
},
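Because `min_nodes` is 0, the cluster scales down to zero nodes between runs. If you want to adjust the scale settings on an existing cluster later, something along these lines should work (a sketch; `AmlCompute.update` accepts these keyword arguments in SDK v1):

```python
# Sketch: adjust autoscale settings on an existing AmlCompute cluster (SDK v1).
compute_target.update(
    min_nodes=0,
    max_nodes=2,
    idle_seconds_before_scaledown=1200,  # scale down after 20 idle minutes
)
```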
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create worker compute target\n",
"\n",
"Now we create a compute target with CPUs for the additional Ray worker nodes. CPUs in these worker nodes are used by Ray worker processes. Each Ray worker node, depending on the CPUs on the node, may have multiple Ray worker processes. There can be multiple worker tasks on each worker process (core)."
" print(compute_target.get_status().serialize())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646093795069
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"# Choose a name for your Ray worker compute target\n",
"worker_compute_name = 'worker-cpu'\n",
"worker_compute_min_nodes = 0 \n",
"worker_compute_max_nodes = 4\n",
"from azureml.core import Environment\n",
"import os\n",
"\n",
"# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6\n",
"worker_vm_size = 'STANDARD_D2_V2'\n",
"ray_environment_name = 'pong-cpu'\n",
"ray_environment_dockerfile_path = os.path.join(os.getcwd(), 'docker', 'Dockerfile-cpu')\n",
"\n",
"# Create the compute target if it hasn't been created already\n",
"if worker_compute_name in ws.compute_targets:\n",
" worker_compute_target = ws.compute_targets[worker_compute_name]\n",
" if worker_compute_target and type(worker_compute_target) is AmlCompute:\n",
" if worker_compute_target.provisioning_state == 'Succeeded':\n",
" print('found worker compute target. just use it', worker_compute_name)\n",
" else: \n",
" raise Exception(\n",
" 'found worker compute target but it is in state', head_compute_target.provisioning_state)\n",
"else:\n",
" print('creating a new worker compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(\n",
" vm_size=worker_vm_size,\n",
" min_nodes=worker_compute_min_nodes,\n",
" max_nodes=worker_compute_max_nodes,\n",
" vnet_resourcegroup_name=ws.resource_group,\n",
" vnet_name=vnet_name,\n",
" subnet_name='default')\n",
"# Build CPU image\n",
"ray_cpu_env = Environment. \\\n",
" from_dockerfile(name=ray_environment_name, dockerfile=ray_environment_dockerfile_path). \\\n",
" register(workspace=ws)\n",
"ray_cpu_build_details = ray_cpu_env.build(workspace=ws)\n",
"\n",
" # Create the compute target\n",
" worker_compute_target = ComputeTarget.create(ws, worker_compute_name, provisioning_config)\n",
" \n",
" # Can poll for a minimum number of nodes and for a specific timeout. \n",
" # If no min node count is provided it will use the scale settings for the cluster\n",
" worker_compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
" # For a more detailed view of current AmlCompute status, use get_status()\n",
" print(worker_compute_target.get_status().serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train Pong Agent\n",
"To facilitate reinforcement learning, Azure Machine Learning Python SDK provides a high level abstraction, the _ReinforcementLearningEstimator_ class, which allows users to easily construct reinforcement learning run configurations for the underlying reinforcement learning framework. Reinforcement Learning in Azure Machine Learning supports the open source [Ray framework](https://ray.io/) and its highly customizable [RLLib](https://ray.readthedocs.io/en/latest/rllib.html#rllib-scalable-reinforcement-learning). In this section we show how to use _ReinforcementLearningEstimator_ and Ray/RLLib framework to train a Pong playing agent.\n",
"\n",
"\n",
"### Define worker configuration\n",
"Define a `WorkerConfiguration` using your worker compute target. We specify the number of nodes in the worker compute target to be used for training and additional PIP packages to install on those nodes as a part of setup.\n",
"In this case, we define the PIP packages as dependencies for both head and worker nodes. With this setup, the game simulations will run directly on the worker compute nodes."
"ray_cpu_build_details.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646160884910
},
"jupyter": {
"outputs_hidden": true,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.contrib.train.rl import WorkerConfiguration\n",
"from azureml.core import Environment\n",
"\n",
"# Specify the Ray worker configuration\n",
"worker_conf = WorkerConfiguration(\n",
"ray_environment_name = 'pong-gpu'\n",
"ray_environment_dockerfile_path = os.path.join(os.getcwd(), 'docker', 'Dockerfile-gpu')\n",
"\n",
" # Azure Machine Learning compute target to run Ray workers\n",
" compute_target=worker_compute_target, \n",
"# Build GPU image\n",
"ray_gpu_env = Environment. \\\n",
" from_dockerfile(name=ray_environment_name, dockerfile=ray_environment_dockerfile_path). \\\n",
" register(workspace=ws)\n",
"ray_gpu_build_details = ray_gpu_env.build(workspace=ws)\n",
"\n",
" # Number of worker nodes\n",
" node_count=4,\n",
" \n",
" # GPU\n",
" use_gpu=False, \n",
" \n",
" # Shared memory size\n",
" # Uncomment line below to set shm_size for workers (requires Azure Machine Learning SDK 1.33 or greater)\n",
" # shm_size=1024*1024*1024, \n",
" \n",
" # PIP packages to use\n",
")"
"ray_gpu_build_details.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create reinforcement learning estimator\n",
"### Create reinforcement learning training run\n",
"\n",
"The `ReinforcementLearningEstimator` is used to submit a job to Azure Machine Learning to start the Ray experiment run. We define the training script parameters here that will be passed to the estimator. \n",
"The code below submits the training run using a `ScriptRunConfig`. By providing the\n",
"command to run the training, and a `RunConfig` object configured with your\n",
"compute target, number of nodes, and environment image to use.\n",
"\n",
"We specify `episode_reward_mean` to 18 as we want to stop the training as soon as the trained agent reaches an average win margin of at least 18 point over opponent over all episodes in the training epoch.\n",
"Number of Ray worker processes are defined by parameter `num_workers`. We set it to 13 as we have 13 CPUs available in our compute targets. Multiple Ray worker processes parallelizes agent training and helps in achieving our goal faster. \n",
@@ -479,70 +301,44 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646162435310
}
},
"outputs": [],
"source": [
"from azureml.contrib.train.rl import ReinforcementLearningEstimator, Ray\n",
"from azureml.core import RunConfiguration, ScriptRunConfig, Experiment\n",
"from azureml.core.runconfig import DockerConfiguration, RunConfiguration\n",
"\n",
"experiment_name = 'rllib-pong-multi-node'\n",
"\n",
"experiment = Experiment(workspace=ws, name=experiment_name)\n",
"ray_environment = Environment.get(workspace=ws, name=ray_environment_name)\n",
"\n",
"aml_run_config_ml = RunConfiguration(communicator='OpenMpi')\n",
"aml_run_config_ml.target = compute_target\n",
"aml_run_config_ml.docker = DockerConfiguration(use_docker=True)\n",
"aml_run_config_ml.node_count = 2\n",
"aml_run_config_ml.environment = ray_environment\n",
"\n",
"training_algorithm = \"IMPALA\"\n",
"rl_environment = \"PongNoFrameskip-v4\"\n",
"script_name='pong_rllib.py'\n",
"\n",
"# Training script parameters\n",
"script_params = {\n",
"command=[\n",
" 'python', script_name,\n",
" '--run', training_algorithm,\n",
" '--env', rl_environment,\n",
" '--config', '\\'{\"num_gpus\": 1, \"num_workers\": 11}\\'',\n",
" '--stop', '\\'{\"episode_reward_mean\": 18, \"time_total_s\": 3600}\\''\n",
"]\n",
"\n",
" # Training algorithm, IMPALA in this case\n",
" \"--run\": training_algorithm,\n",
" \n",
" # Environment, Pong in this case\n",
" \"--env\": rl_environment,\n",
" \n",
" # Add additional single quotes at the both ends of string values as we have spaces in the \n",
" # string parameters, outermost quotes are not passed to scripts as they are not actually part of string\n",
" # Number of GPUs\n",
" # Number of ray workers\n",
" \"--config\": '\\'{\"num_gpus\": 1, \"num_workers\": 13}\\'',\n",
" \n",
" # Target episode reward mean to stop the training\n",
" # Total training time in seconds\n",
" \"--stop\": '\\'{\"episode_reward_mean\": 18, \"time_total_s\": 3600}\\'',\n",
"}\n",
"\n",
"# Reinforcement learning estimator\n",
"rl_estimator = ReinforcementLearningEstimator(\n",
" \n",
" # Location of source files\n",
" source_directory='files',\n",
" \n",
" # Python script file\n",
" entry_script=\"pong_rllib.py\",\n",
" \n",
" # Parameters to pass to the script file\n",
" # Defined above.\n",
" script_params=script_params,\n",
" \n",
" # The Azure Machine Learning compute target set up for Ray head nodes\n",
" compute_target=head_compute_target,\n",
" \n",
" # GPU usage\n",
" use_gpu=True,\n",
" \n",
" # Reinforcement learning framework. Currently must be Ray.\n",
" rl_framework=Ray('0.8.3'),\n",
" \n",
" # Ray worker configuration defined above.\n",
" worker_configuration=worker_conf,\n",
" \n",
" # How long to wait for whole cluster to start\n",
" cluster_coordination_timeout_seconds=3600,\n",
" \n",
" # Maximum time for the whole Ray job to run\n",
" # This will cut off the run after an hour\n",
" max_run_duration_seconds=3600,\n",
" \n",
" # Allow the docker container Ray runs in to make full use\n",
" # of the shared memory available from the host OS.\n",
" shm_size=24*1024*1024*1024\n",
")"
"config = ScriptRunConfig(source_directory='./files',\n",
" command=command,\n",
" run_config = aml_run_config_ml\n",
" )\n",
"training_run = experiment.submit(config)"
]
},
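The extra single quotes around the `--config` and `--stop` values keep the embedded spaces intact when the command is executed. On the script side, the values can then be parsed as JSON; a hypothetical sketch (this is illustrative, not the actual `pong_rllib.py`):

```python
# Illustrative sketch of parsing the quoted JSON arguments;
# the real logic lives in pong_rllib.py.
import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument('--run')
parser.add_argument('--env')
parser.add_argument('--config', default='{}')
parser.add_argument('--stop', default='{}')
args, _ = parser.parse_known_args()

# Strip the protective single quotes before parsing the JSON payloads.
config = json.loads(args.config.strip("'"))
stop = json.loads(args.stop.strip("'"))
print('workers:', config.get('num_workers'), 'reward target:', stop.get('episode_reward_mean'))
```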
{
@@ -571,23 +367,6 @@
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the estimator to start a run\n",
"Now we use the rl_estimator configured above to submit a run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"run = exp.submit(config=rl_estimator)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -605,7 +384,7 @@
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(run).show()"
"RunDetails(training_run).show()"
]
},
{
@@ -614,7 +393,7 @@
"source": [
"### Stop the run\n",
"\n",
"To stop the run, call `run.cancel()`."
"To stop the run, call `training_run.cancel()`."
]
},
{
@@ -624,7 +403,7 @@
"outputs": [],
"source": [
"# Uncomment line below to cancel the run\n",
"# run.cancel()"
"# training_run.cancel()"
]
},
{
@@ -643,7 +422,7 @@
"metadata": {},
"outputs": [],
"source": [
"run.wait_for_completion()"
"training_run.wait_for_completion()"
]
},
{
@@ -663,8 +442,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Get the reward metrics from worker run\n",
"episode_reward_mean = run.get_metrics(name='episode_reward_mean')"
"# Get the reward metrics from training_run\n",
"episode_reward_mean = training_run.get_metrics(name='episode_reward_mean')"
]
},
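To see how the agent improved over time, you can plot the metric locally. A sketch, assuming `get_metrics` returned a list of values under the `episode_reward_mean` key and that matplotlib is installed:

```python
# Sketch: plot the logged reward metric (assumes matplotlib is available).
import matplotlib.pyplot as plt

rewards = episode_reward_mean['episode_reward_mean']
plt.plot(rewards)
plt.xlabel('training iteration')
plt.ylabel('episode_reward_mean')
plt.show()
```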
{
@@ -733,6 +512,16 @@
"name": "vineetg"
}
],
"categories": [
"how-to-use-azureml",
"reinforcement-learning"
],
"interpreter": {
"hash": "13382f70c1d0595120591d2e358c8d446daf961bf951d1fba9a32631e205d5ab"
},
"kernel_info": {
"name": "python3-azureml"
},
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -748,10 +537,13 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
"version": "3.8.0"
},
"notice": "Copyright (c) Microsoft Corporation. All rights reserved.\u00e2\u20ac\u00afLicensed under the MIT License.\u00e2\u20ac\u00af "
"notice": "Copyright (c) Microsoft Corporation. All rights reserved.\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00afLicensed under the MIT License.\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00af ",
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 0
}

View File

@@ -82,7 +82,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646344676671
}
},
"outputs": [],
"source": [
"import azureml.core\n",
@@ -100,7 +104,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646344680982
}
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
@@ -123,7 +131,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646344684217
}
},
"outputs": [],
"source": [
"import os.path\n",
@@ -146,7 +158,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646344690768
}
},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeInstance\n",
@@ -194,13 +210,52 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646344835579
}
},
"outputs": [],
"source": [
"from azureml.core.experiment import Experiment\n",
"\n",
"experiment_name = 'CartPole-v0-CI'\n",
"exp = Experiment(workspace=ws, name=experiment_name)"
"experiment = Experiment(workspace=ws, name=experiment_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1646346293902
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"import os\n",
"import time\n",
"\n",
"ray_environment_name = 'cartpole-ray-ci'\n",
"ray_environment_dockerfile_path = os.path.join(os.getcwd(), 'files', 'docker', 'Dockerfile')\n",
"\n",
"# Build environment image\n",
"ray_environment = Environment. \\\n",
" from_dockerfile(name=ray_environment_name, dockerfile=ray_environment_dockerfile_path). \\\n",
" register(workspace=ws)\n",
"ray_env_build_details = ray_environment.build(workspace=ws)\n",
"\n",
"ray_env_build_details.wait_for_completion(show_output=True)"
]
},
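Rebuilding the image on every run is not necessary; once registered, the environment can be fetched by name on subsequent runs (a sketch using the `ray_environment_name` defined above):

```python
# Sketch: reuse the registered environment instead of rebuilding the image.
from azureml.core import Environment

ray_environment = Environment.get(workspace=ws, name=ray_environment_name)
```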
{
@@ -208,80 +263,69 @@
"metadata": {},
"source": [
"## Train Cartpole Agent\n",
"To facilitate reinforcement learning, Azure Machine Learning Python SDK provides a high level abstraction, the _ReinforcementLearningEstimator_ class, which allows users to easily construct reinforcement learning run configurations for the underlying reinforcement learning framework. Reinforcement Learning in Azure Machine Learning supports the open source [Ray framework](https://ray.io/) and its highly customizable [RLlib](https://ray.readthedocs.io/en/latest/rllib.html#rllib-scalable-reinforcement-learning). In this section we show how to use _ReinforcementLearningEstimator_ and Ray/RLlib framework to train a cartpole playing agent. "
"In this section, we show how to use Azure Machine Learning jobs and Ray/RLlib framework to train a cartpole playing agent. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create reinforcement learning estimator\n",
"### Create reinforcement learning training run\n",
"\n",
"The code below creates an instance of *ReinforcementLearningEstimator*, `training_estimator`, which then will be used to submit a job to Azure Machine Learning to start the Ray experiment run.\n",
"\n",
"Note that this example is purposely simplified to the minimum. Here is a short description of the parameters we are passing into the constructor:\n",
"\n",
"- `source_directory`, local directory containing your training script(s) and helper modules,\n",
"- `entry_script`, path to your entry script relative to the source directory,\n",
"- `script_params`, constant parameters to be passed to each run of training script,\n",
"- `compute_target`, reference to the compute target in which the trainer and worker(s) jobs will be executed,\n",
"- `rl_framework`, the reinforcement learning framework to be used (currently must be Ray).\n",
"\n",
"We use the `script_params` parameter to pass in general and algorithm-specific parameters to the training script.\n"
"The code below submits the training run using a `ScriptRunConfig`. By providing the\n",
"command to run the training, and a `RunConfig` object configured with your\n",
"compute target, number of nodes, and environment image to use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347120585
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.contrib.train.rl import ReinforcementLearningEstimator, Ray\n",
"from azureml.core import Environment\n",
"from azureml.core import RunConfiguration, ScriptRunConfig, Experiment\n",
"from azureml.core.runconfig import DockerConfiguration, RunConfiguration\n",
"\n",
"training_algorithm = \"PPO\"\n",
"rl_environment = \"CartPole-v0\"\n",
"training_algorithm = 'PPO'\n",
"rl_environment = 'CartPole-v0'\n",
"\n",
"script_params = {\n",
"script_name = 'cartpole_training.py'\n",
"script_arguments = [\n",
" '--run', training_algorithm,\n",
" '--env', rl_environment,\n",
" '--config', '{\"num_gpus\": 0, \"num_workers\": 1}',\n",
" '--stop', '{\"episode_reward_mean\": 200, \"time_total_s\": 300}',\n",
" '--checkpoint-freq', '2',\n",
" '--checkpoint-at-end',\n",
" '--local-dir', './logs'\n",
"]\n",
"\n",
" # Training algorithm\n",
" \"--run\": training_algorithm,\n",
"aml_run_config_ml = RunConfiguration(communicator='OpenMpi')\n",
"aml_run_config_ml.target = compute_target\n",
"aml_run_config_ml.docker = DockerConfiguration(use_docker=True)\n",
"aml_run_config_ml.node_count = 1\n",
"aml_run_config_ml.environment = ray_environment\n",
"\n",
" # Training environment\n",
" \"--env\": rl_environment,\n",
" \n",
" # Algorithm-specific parameters\n",
" \"--config\": '\\'{\"num_gpus\": 0, \"num_workers\": 1}\\'',\n",
" \n",
" # Stop conditions\n",
" \"--stop\": '\\'{\"episode_reward_mean\": 200, \"time_total_s\": 300}\\'',\n",
" \n",
" # Frequency of taking checkpoints\n",
" \"--checkpoint-freq\": 2,\n",
" \n",
" # If a checkpoint should be taken at the end - optional argument with no value\n",
" \"--checkpoint-at-end\": \"\",\n",
" \n",
" # Log directory\n",
" \"--local-dir\": './logs'\n",
"}\n",
"\n",
"training_estimator = ReinforcementLearningEstimator(\n",
"\n",
" # Location of source files\n",
" source_directory='files',\n",
" \n",
" # Python script file\n",
" entry_script='cartpole_training.py',\n",
" \n",
" # A dictionary of arguments to pass to the training script specified in ``entry_script``\n",
" script_params=script_params,\n",
" \n",
" # The Azure Machine Learning compute target set up for Ray head nodes\n",
" compute_target=compute_target,\n",
" \n",
" # Reinforcement learning framework. Currently must be Ray.\n",
" rl_framework=Ray()\n",
")"
"training_config = ScriptRunConfig(source_directory='./files',\n",
" script=script_name,\n",
" arguments=script_arguments,\n",
" run_config = aml_run_config_ml\n",
" )\n",
"training_run = experiment.submit(training_config)"
]
},
{
@@ -304,6 +348,7 @@
"See [RLlib Training APIs](https://ray.readthedocs.io/en/latest/rllib-training.html#rllib-training-apis) for more details, and also [Training (tune.run, tune.Experiment)](https://ray.readthedocs.io/en/latest/tune/api_docs/execution.html#training-tune-run-tune-experiment) for the complete list of parameters.\n",
"\n",
"```python\n",
"import os\n",
"import ray\n",
"import ray.tune as tune\n",
"\n",
@@ -311,8 +356,9 @@
"\n",
" # parse arguments ...\n",
" \n",
" # Intitialize ray\n",
" ay.init(address=args.ray_address)\n",
" # Start ray head (single node)\n",
" os.system('ray start --head')\n",
" ray.init(address='auto')\n",
"\n",
" # Run training task using tune.run\n",
" tune.run(\n",
@@ -326,23 +372,6 @@
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the estimator to start experiment\n",
"Now we use the *training_estimator* to submit a run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_run = exp.submit(training_estimator)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -350,15 +379,17 @@
"### Monitor experiment\n",
"Azure Machine Learning provides a Jupyter widget to show the status of an experiment run. You could use this widget to monitor the status of the runs.\n",
"\n",
"Note that _ReinforcementLearningEstimator_ creates at least two runs: (a) A parent run, i.e. the run returned above, and (b) a collection of child runs. The number of the child runs depends on the configuration of the reinforcement learning estimator. In our simple scenario, configured above, only one child run will be created.\n",
"\n",
"The widget will show a list of the child runs as well. You can click on the link under **Status** to see the details of a child run. It will also show the metrics being logged."
"You can click on the link under **Status** to see the details of a child run. It will also show the metrics being logged."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347127671
}
},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
@@ -398,50 +429,23 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347318682
}
},
"outputs": [],
"source": [
"training_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get a handle to the child run\n",
"You can obtain a handle to the child run as follows. In our scenario, there is only one child run, we have it called `child_run_0`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"child_run_0 = None\n",
"timeout = 30\n",
"while timeout > 0 and not child_run_0:\n",
" child_runs = list(training_run.get_children())\n",
" print('Number of child runs:', len(child_runs))\n",
" if len(child_runs) > 0:\n",
" child_run_0 = child_runs[0]\n",
" break\n",
" time.sleep(2) # Wait for 2 seconds\n",
" timeout -= 2\n",
"\n",
"print('Child run info:')\n",
"print(child_run_0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Evaluate Trained Agent and See Results\n",
"\n",
"We can evaluate a previously trained policy using the `rollout.py` helper script provided by RLlib (see [Evaluating Trained Policies](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies) for more details). Here we use an adaptation of this script to reconstruct a policy from a checkpoint taken and saved during training. We took these checkpoints by setting `checkpoint-freq` and `checkpoint-at-end` parameters above.\n",
"We can evaluate a previously trained policy using the `cartpole_rollout.py` helper script provided by RLlib (see [Evaluating Trained Policies](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies) for more details). Here we use an adaptation of this script to reconstruct a policy from a checkpoint taken and saved during training. We took these checkpoints by setting `checkpoint-freq` and `checkpoint-at-end` parameters above.\n",
"\n",
"In this section we show how to get access to these checkpoints data, and then how to use them to evaluate the trained policy."
]
@@ -458,7 +462,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347328505
}
},
"outputs": [],
"source": [
"from os import path\n",
@@ -471,7 +479,7 @@
" dir_util.remove_tree(training_artifacts_path)\n",
"\n",
"# Download run artifacts to local compute\n",
"child_run_0.download_files(training_artifacts_path)"
"training_run.download_files(training_artifacts_path)"
]
},
{
@@ -484,7 +492,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347334571
}
},
"outputs": [],
"source": [
"# A helper function to find checkpoint files in a directory\n",
@@ -501,7 +513,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347337724
}
},
"outputs": [],
"source": [
"# Find checkpoints and last checkpoint number\n",
@@ -529,14 +545,18 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347346085
}
},
"outputs": [],
"source": [
"# Upload the checkpoint files and create a DataSet\n",
"from azureml.core import Dataset\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"checkpoint_dataref = datastore.upload_files(checkpoint_files, target_path='cartpole_checkpoints_' + run_id, overwrite=True)\n",
"checkpoint_dataref = datastore.upload_files(checkpoint_files, target_path='cartpole_checkpoints_' + training_run.id, overwrite=True)\n",
"checkpoint_ds = Dataset.File.from_files(checkpoint_dataref)"
]
},
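Inside the rollout job the dataset is mounted as a local folder. Conceptually, the script joins the mount path with the checkpoint number to locate the checkpoint file; a hypothetical sketch of that lookup (the actual logic lives in `cartpole_rollout.py`, and the `checkpoint_<n>/checkpoint-<n>` layout is the RLlib 0.8 convention):

```python
# Hypothetical helper: resolve a checkpoint file under the mounted
# artifacts folder, assuming RLlib's checkpoint_<n>/checkpoint-<n> layout.
import os

def resolve_checkpoint(artifacts_mount, checkpoint_number):
    folder = 'checkpoint_{}'.format(checkpoint_number)
    return os.path.join(artifacts_mount, folder,
                        'checkpoint-{}'.format(checkpoint_number))
```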
@@ -550,7 +570,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347354726
}
},
"outputs": [],
"source": [
"artifacts_paths = checkpoint_ds.to_path()\n",
@@ -564,82 +588,67 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Evaluate a trained policy\n",
"We need to configure another reinforcement learning estimator, `rollout_estimator`, and then use it to submit another run. Note that the entry script for this estimator now points to `cartpole-rollout.py` script.\n",
"Also note how we pass the checkpoints dataset to this script using `inputs` parameter of the _ReinforcementLearningEstimator_.\n",
"## Evaluate Trained Agent and See Results\n",
"\n",
"We are using script parameters to pass in the same algorithm and the same environment used during training. We also specify the checkpoint number of the checkpoint we wish to evaluate, `checkpoint-number`, and number of the steps we shall run the rollout, `steps`.\n",
"\n",
"The checkpoints dataset will be accessible to the rollout script as a mounted folder. The mounted folder and the checkpoint number, passed in via `checkpoint-number`, will be used to create a path to the checkpoint we are going to evaluate. The created checkpoint path then will be passed into RLlib rollout script for evaluation.\n",
"\n",
"Now let's configure rollout estimator. Note that we use the last checkpoint for evaluation. The assumption is that the last checkpoint points to our best trained agent. You may change this to any of the checkpoint numbers printed above and observe the effect."
"We can evaluate a previously trained policy using the `cartpole_rollout.py` helper script provided by RLlib (see [Evaluating Trained Policies](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies) for more details). Here we use an adaptation of this script to reconstruct a policy from a checkpoint taken and saved during training. We took these checkpoints by setting `checkpoint-freq` and `checkpoint-at-end` parameters above.\n",
"In this section we show how to use these checkpoints to evaluate the trained policy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347414835
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"script_params = { \n",
" # Checkpoint number of the checkpoint from which to roll out\n",
" \"--checkpoint-number\": last_checkpoint_number,\n",
"ray_environment_name = 'cartpole-ray-ci'\n",
"\n",
" # Training algorithm\n",
" \"--run\": training_algorithm,\n",
"experiment_name = 'CartPole-v0-CI'\n",
"training_algorithm = 'PPO'\n",
"rl_environment = 'CartPole-v0'\n",
"\n",
" # Training environment\n",
" \"--env\": rl_environment,\n",
"experiment = Experiment(workspace=ws, name=experiment_name)\n",
"ray_environment = Environment.get(workspace=ws, name=ray_environment_name)\n",
"\n",
" # Algorithm-specific parameters\n",
" \"--config\": '{}',\n",
"script_name = 'cartpole_rollout.py'\n",
"script_arguments = [\n",
" '--run', training_algorithm,\n",
" '--env', rl_environment,\n",
" '--config', '{}',\n",
" '--steps', '2000',\n",
" '--checkpoint-number', str(last_checkpoint_number),\n",
" '--no-render',\n",
" '--artifacts-dataset', checkpoint_ds.as_named_input('artifacts_dataset'),\n",
" '--artifacts-path', checkpoint_ds.as_named_input('artifacts_path').as_mount()\n",
"]\n",
"\n",
" # Number of rollout steps \n",
" \"--steps\": 2000,\n",
"aml_run_config_ml = RunConfiguration(communicator='OpenMpi')\n",
"aml_run_config_ml.target = compute_target\n",
"aml_run_config_ml.docker = DockerConfiguration(use_docker=True)\n",
"aml_run_config_ml.node_count = 1\n",
"aml_run_config_ml.environment = ray_environment\n",
"aml_run_config_ml.data\n",
"\n",
" # If should repress rendering of the environment\n",
" \"--no-render\": \"\"\n",
"}\n",
"rollout_config = ScriptRunConfig(\n",
" source_directory='./files',\n",
" script=script_name,\n",
" arguments=script_arguments,\n",
" run_config = aml_run_config_ml\n",
" )\n",
" \n",
"rollout_estimator = ReinforcementLearningEstimator(\n",
" # Location of source files\n",
" source_directory='files',\n",
" \n",
" # Python script file\n",
" entry_script='cartpole_rollout.py',\n",
" \n",
" # A dictionary of arguments to pass to the rollout script specified in ``entry_script``\n",
" script_params = script_params,\n",
" \n",
" # Data inputs\n",
" inputs=[\n",
" checkpoint_ds.as_named_input('artifacts_dataset'),\n",
" checkpoint_ds.as_named_input('artifacts_path').as_mount()],\n",
" \n",
" # The Azure Machine Learning compute target\n",
" compute_target=compute_target,\n",
" \n",
" # Reinforcement learning framework. Currently must be Ray.\n",
" rl_framework=Ray(),\n",
" \n",
" # Additional pip packages to install\n",
" pip_packages = ['azureml-dataset-runtime[fuse,pandas]'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Same as before, we use the *rollout_estimator* to submit a run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rollout_run = exp.submit(rollout_estimator)"
"rollout_run = experiment.submit(rollout_config)"
]
},
{
@@ -652,7 +661,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347429626
}
},
"outputs": [],
"source": [
"RunDetails(rollout_run).show()"
@@ -717,6 +730,16 @@
"name": "hoazari"
}
],
"categories": [
"how-to-use-azureml",
"reinforcement-learning"
],
"interpreter": {
"hash": "13382f70c1d0595120591d2e358c8d446daf961bf951d1fba9a32631e205d5ab"
},
"kernel_info": {
"name": "python3-azureml"
},
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -732,10 +755,20 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
"version": "3.7.9"
},
"notice": "Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License."
"microsoft": {
"host": {
"AzureML": {
"notebookHasBeenCompleted": true
}
}
},
"notice": "Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.",
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 0
}

View File

@@ -1,4 +1,3 @@
import argparse
import os
import sys
@@ -11,10 +10,7 @@ from azureml.core import Run
from utils import callbacks
DEFAULT_RAY_ADDRESS = 'localhost:6379'
def run_rollout(args, parser, ray_address):
def run_rollout(args, parser):
config = args.config
if not args.env:
@@ -22,8 +18,6 @@ def run_rollout(args, parser, ray_address):
parser.error("the following arguments are required: --env")
args.env = config.get("env")
ray.init(address=ray_address)
# Create the Trainer from config.
cls = get_trainable_cls(args.run)
agent = cls(env=args.env, config=config)
@@ -76,6 +70,10 @@ def run_rollout(args, parser, ray_address):
if __name__ == "__main__":
# Start ray head (single node)
os.system('ray start --head')
ray.init(address='auto')
# Add positional argument - serves as placeholder for checkpoint
argvc = sys.argv[1:]
argvc.insert(0, 'checkpoint-placeholder')
@@ -88,8 +86,12 @@ if __name__ == "__main__":
help='Checkpoint number of the checkpoint from which to roll out')
rollout_parser.add_argument(
'--ray-address', required=False, default=DEFAULT_RAY_ADDRESS,
help='The address of the Ray cluster to connect to')
'--artifacts-dataset', required=True,
help='The checkpoints artifacts dataset')
rollout_parser.add_argument(
'--artifacts-path', required=True,
help='The checkpoints artifacts path')
args = rollout_parser.parse_args(argvc)
@@ -116,4 +118,4 @@ if __name__ == "__main__":
args.checkpoint = checkpoint
# Start rollout
run_rollout(args, rollout_parser, args.ray_address)
run_rollout(args, rollout_parser)

View File

@@ -1,17 +1,10 @@
import argparse
import os
import sys
import ray
from ray.rllib import train
from ray import tune
import os
from utils import callbacks
DEFAULT_RAY_ADDRESS = 'localhost:6379'
if __name__ == "__main__":
# Parse arguments and add callbacks to config
@@ -24,11 +17,9 @@ if __name__ == "__main__":
if 'monitor' in args.config and args.config['monitor']:
print("Video capturing is ON!")
# Start (connect to) Ray cluster
if args.ray_address is None:
args.ray_address = DEFAULT_RAY_ADDRESS
ray.init(address=args.ray_address)
# Start ray head (single node)
os.system('ray start --head')
ray.init(address='auto')
# Run training task using tune.run
tune.run(

View File

@@ -0,0 +1,17 @@
FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04
RUN pip install ray-on-aml==0.1.6
RUN pip install gym[atari]==0.19.0
RUN pip install gym[accept-rom-license]==0.19.0
RUN pip install ale-py==0.7.0
RUN pip install azureml-core
RUN pip install azureml-dataset-runtime
RUN pip install ray==0.8.7
RUN pip install ray[rllib,tune,serve]==0.8.7
RUN pip install tensorflow==1.14.0
USER root
RUN apt-get update
RUN apt-get install -y jq
RUN apt-get install -y rsync

View File

@@ -8,7 +8,7 @@ from azureml.core import Run
def on_train_result(info):
'''Callback on train result to record metrics returned by trainer.
'''
run = Run.get_context().parent
run = Run.get_context()
run.log(
name='episode_reward_mean',
value=info["result"]["episode_reward_mean"])
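With `Run.get_context()` the metric is now logged directly on the submitted run rather than its parent. How the callback gets attached is an assumption here, but in Ray 0.8.x a function like `on_train_result` is typically registered through the `callbacks` entry of the trainer config, roughly as follows:

```python
# Sketch: attaching the callback in Ray 0.8.x (an assumption about what
# utils.callbacks does; the callbacks API changed in later Ray releases).
import ray.tune as tune

tune.run(
    'PPO',
    config={
        'env': 'CartPole-v0',
        'callbacks': {'on_train_result': on_train_result},
    },
    stop={'episode_reward_mean': 200},
)
```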

View File

@@ -82,7 +82,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646347616697
}
},
"outputs": [],
"source": [
"import azureml.core\n",
@@ -101,7 +105,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646429058500
}
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
@@ -126,7 +134,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646359152101
}
},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
@@ -167,13 +179,51 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646348040613
}
},
"outputs": [],
"source": [
"from azureml.core.experiment import Experiment\n",
"\n",
"experiment_name = 'CartPole-v0-SC'\n",
"exp = Experiment(workspace=ws, name=experiment_name)"
"experiment = Experiment(workspace=ws, name=experiment_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1646417962898
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"import os\n",
"\n",
"ray_environment_name = 'cartpole-ray-sc'\n",
"ray_environment_dockerfile_path = os.path.join(os.getcwd(), 'files', 'docker', 'Dockerfile')\n",
"\n",
"# Build environment image\n",
"ray_environment = Environment. \\\n",
" from_dockerfile(name=ray_environment_name, dockerfile=ray_environment_dockerfile_path). \\\n",
" register(workspace=ws)\n",
"ray_env_build_details = ray_environment.build(workspace=ws)\n",
"\n",
"ray_env_build_details.wait_for_completion(show_output=True)"
]
},
{
@@ -181,109 +231,79 @@
"metadata": {},
"source": [
"## Train Cartpole Agent\n",
"To facilitate reinforcement learning, Azure Machine Learning Python SDK provides a high level abstraction, the _ReinforcementLearningEstimator_ class, which allows users to easily construct reinforcement learning run configurations for the underlying reinforcement learning framework. Reinforcement Learning in Azure Machine Learning supports the open source [Ray framework](https://ray.io/) and its highly customizable [RLlib](https://ray.readthedocs.io/en/latest/rllib.html#rllib-scalable-reinforcement-learning). In this section we show how to use _ReinforcementLearningEstimator_ and Ray/RLlib framework to train a cartpole playing agent. "
"In this section, we show how to use Azure Machine Learning jobs and Ray/RLlib framework to train a cartpole playing agent. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create reinforcement learning estimator\n",
"### Create reinforcement learning training run\n",
"\n",
"The code below creates an instance of *ReinforcementLearningEstimator*, `training_estimator`, which then will be used to submit a job to Azure Machine Learning to start the Ray experiment run.\n",
"\n",
"Note that this example is purposely simplified to the minimum. Here is a short description of the parameters we are passing into the constructor:\n",
"\n",
"- `source_directory`, local directory containing your training script(s) and helper modules,\n",
"- `entry_script`, path to your entry script relative to the source directory,\n",
"- `script_params`, constant parameters to be passed to each run of training script,\n",
"- `compute_target`, reference to the compute target in which the trainer and worker(s) jobs will be executed,\n",
"- `rl_framework`, the reinforcement learning framework to be used (currently must be Ray).\n",
"\n",
"We use the `script_params` parameter to pass in general and algorithm-specific parameters to the training script.\n"
"The code below submits the training run using a `ScriptRunConfig`. By providing the\n",
"command to run the training, and a `RunConfig` object configured with your\n",
"compute target, number of nodes, and environment image to use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646437786449
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.contrib.train.rl import ReinforcementLearningEstimator, Ray\n",
"from azureml.core.environment import Environment\n",
"from azureml.core import RunConfiguration, ScriptRunConfig, Experiment\n",
"from azureml.core.runconfig import DockerConfiguration, RunConfiguration\n",
"\n",
"training_algorithm = \"PPO\"\n",
"rl_environment = \"CartPole-v0\"\n",
"video_capture = True\n",
"\n",
"if video_capture:\n",
" algorithm_config = '\\'{\"num_gpus\": 0, \"num_workers\": 1, \"monitor\": true}\\''\n",
"else:\n",
" algorithm_config = '\\'{\"num_gpus\": 0, \"num_workers\": 1, \"monitor\": false}\\''\n",
"\n",
"script_params = {\n",
"script_name = 'cartpole_training.py'\n",
"script_arguments = [\n",
" '--run', training_algorithm,\n",
" '--env', rl_environment,\n",
" '--stop', '\\'{\"episode_reward_mean\": 200, \"time_total_s\": 300}\\'',\n",
" '--config', algorithm_config,\n",
" '--checkpoint-freq', '2',\n",
" '--checkpoint-at-end',\n",
" '--local-dir', './logs'\n",
"]\n",
"\n",
" # Training algorithm\n",
" \"--run\": training_algorithm,\n",
"ray_environment = Environment.get(ws, name=ray_environment_name)\n",
"run_config = RunConfiguration(communicator='OpenMpi')\n",
"run_config.target = compute_target\n",
"run_config.docker = DockerConfiguration(use_docker=True)\n",
"run_config.node_count = 1\n",
"run_config.environment = ray_environment\n",
"command=[\"python\", script_name, *script_arguments]\n",
"\n",
" # Training environment\n",
" \"--env\": rl_environment,\n",
" \n",
" # Algorithm-specific parameters\n",
" \"--config\": algorithm_config,\n",
" \n",
" # Stop conditions\n",
" \"--stop\": '\\'{\"episode_reward_mean\": 200, \"time_total_s\": 300}\\'',\n",
" \n",
" # Frequency of taking checkpoints\n",
" \"--checkpoint-freq\": 2,\n",
" \n",
" # If a checkpoint should be taken at the end - optional argument with no value\n",
" \"--checkpoint-at-end\": \"\",\n",
" \n",
" # Log directory\n",
" \"--local-dir\": './logs'\n",
"}\n",
"\n",
"xvfb_env = None\n",
"if video_capture:\n",
" # Ray's video capture support requires to run everything under a headless display driver called (xvfb).\n",
" # There are two parts to this:\n",
" # 1. Use a custom docker file with proper instructions to install xvfb, ffmpeg, python-opengl\n",
" # and other dependencies.\n",
" command = [\"xvfb-run -s '-screen 0 640x480x16 -ac +extension GLX +render' \"] + command\n",
" run_config.environment_variables[\"SDL_VIDEODRIVER\"] = \"dummy\"\n",
"\n",
" with open(\"files/docker/Dockerfile\", \"r\") as f:\n",
" dockerfile=f.read()\n",
"trainint_config = ScriptRunConfig(source_directory='./files',\n",
" command=command,\n",
" run_config = run_config\n",
" )\n",
"\n",
" xvfb_env = Environment(name='xvfb-vdisplay')\n",
" xvfb_env.docker.base_image = None\n",
" xvfb_env.docker.base_dockerfile = dockerfile\n",
" \n",
" # 2. Execute the Python process via the xvfb-run command to set up the headless display driver.\n",
" xvfb_env.python.user_managed_dependencies = True\n",
" xvfb_env.python.interpreter_path = \"xvfb-run -s '-screen 0 640x480x16 -ac +extension GLX +render' python\"\n",
"\n",
"\n",
"training_estimator = ReinforcementLearningEstimator(\n",
"\n",
" # Location of source files\n",
" source_directory='files',\n",
" \n",
" # Python script file\n",
" entry_script='cartpole_training.py',\n",
" \n",
" # A dictionary of arguments to pass to the training script specified in ``entry_script``\n",
" script_params=script_params,\n",
" \n",
" # The Azure Machine Learning compute target set up for Ray head nodes\n",
" compute_target=compute_target,\n",
" \n",
" # Reinforcement learning framework. Currently must be Ray.\n",
" rl_framework=Ray(),\n",
" \n",
" # Custom environmnet for Xvfb\n",
" environment=xvfb_env\n",
")"
"training_run = experiment.submit(trainint_config)"
]
},
{
@@ -313,8 +333,9 @@
"\n",
" # parse arguments ...\n",
" \n",
" # Intitialize ray\n",
" ray.init(address=args.ray_address)\n",
" # Start ray head (single node)\n",
" os.system('ray start --head')\n",
" ray.init(address='auto')\n",
"\n",
" # Run training task using tune.run\n",
" tune.run(\n",
@@ -328,40 +349,23 @@
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Submit the estimator to start experiment\n",
"Now we use the *training_estimator* to submit a run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"training_run = exp.submit(training_estimator)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Monitor experiment\n",
"\n",
"Azure Machine Learning provides a Jupyter widget to show the status of an experiment run. You could use this widget to monitor the status of the runs.\n",
"\n",
"Note that _ReinforcementLearningEstimator_ creates at least two runs: (a) A parent run, i.e. the run returned above, and (b) a collection of child runs. The number of the child runs depends on the configuration of the reinforcement learning estimator. In our simple scenario, configured above, only one child run will be created.\n",
"\n",
"The widget will show a list of the child runs as well. You can click on the link under **Status** to see the details of a child run. It will also show the metrics being logged."
"Azure Machine Learning provides a Jupyter widget to show the status of an experiment run. You could use this widget to monitor the status of the runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646437627002
}
},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
@@ -406,37 +410,6 @@
"training_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get a handle to the child run\n",
"You can obtain a handle to the child run as follows. In our scenario, there is only one child run, we have it called `child_run_0`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"child_run_0 = None\n",
"timeout = 30\n",
"while timeout > 0 and not child_run_0:\n",
" child_runs = list(training_run.get_children())\n",
" print('Number of child runs:', len(child_runs))\n",
" if len(child_runs) > 0:\n",
" child_run_0 = child_runs[0]\n",
" break\n",
" time.sleep(2) # Wait for 2 seconds\n",
" timeout -= 2\n",
"\n",
"print('Child run info:')\n",
"print(child_run_0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -453,8 +426,8 @@
"source": [
"from azureml.core import Run\n",
"\n",
"run_id = child_run_0.id # Or set to run id of a completed run (e.g. 'rl-cartpole-v0_1587572312_06e04ace_head')\n",
"child_run_0 = Run(exp, run_id=run_id)"
"run_id = training_run.id # Or set to run id of a completed run (e.g. 'rl-cartpole-v0_1587572312_06e04ace_head')\n",
"run = Run(experiment, run_id=run_id)"
]
},
{
@@ -467,7 +440,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646437652309
}
},
"outputs": [],
"source": [
"from os import path\n",
@@ -480,7 +457,7 @@
" dir_util.remove_tree(training_artifacts_path)\n",
"\n",
"# Download run artifacts to local compute\n",
"child_run_0.download_files(training_artifacts_path)"
"training_run.download_files(training_artifacts_path)"
]
},
{
@@ -497,7 +474,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646437657045
}
},
"outputs": [],
"source": [
"import shutil\n",
@@ -516,15 +497,10 @@
"\n",
"\n",
"# A helper function to display a movie\n",
"from IPython.core.display import display, HTML\n",
"from IPython.core.display import Video\n",
"from IPython.display import display\n",
"def display_movie(movie_file):\n",
" display(\n",
" HTML('\\\n",
" <video alt=\"cannot display video\" autoplay loop> \\\n",
" <source src=\"{}\" type=\"video/mp4\"> \\\n",
" </video>'.format(movie_file)\n",
" )\n",
" )"
" display(Video(movie_file, embed=True, html_attributes='controls'))"
]
},
{
@@ -537,7 +513,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646437690241
}
},
"outputs": [],
"source": [
"mp4_files = find_movies(training_artifacts_path)\n",
@@ -554,7 +534,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646437692954
}
},
"outputs": [],
"source": [
"first_movie = mp4_files[0] if len(mp4_files) > 0 else None\n",
@@ -573,7 +557,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646437717147
}
},
"outputs": [],
"source": [
"last_movie = mp4_files[-1] if len(mp4_files) > 0 else None\n",
@@ -597,8 +585,8 @@
"metadata": {},
"source": [
"### Evaluate a trained policy\n",
"We need to configure another reinforcement learning estimator, `rollout_estimator`, and then use it to submit another run. Note that the entry script for this estimator now points to `cartpole-rollout.py` script.\n",
"Also note how we pass the checkpoints dataset to this script using `inputs` parameter of the _ReinforcementLearningEstimator_.\n",
"In this section, we submit another job, to evalute a trained policy. The entrypoint for this job is\n",
"`cartpole-rollout.py` script, and we we pass the checkpoints dataset to this script as a dataset refrence.\n",
"\n",
"We are using script parameters to pass in the same algorithm and the same environment used during training. We also specify the checkpoint number of the checkpoint we wish to evaluate, `checkpoint-number`, and number of the steps we shall run the rollout, `steps`.\n",
"\n",
@@ -663,7 +651,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's configure rollout estimator. Note that we use the last checkpoint for evaluation. The assumption is that the last checkpoint points to our best trained agent. You may change this to any of the checkpoint numbers printed above and observe the effect."
"You can submit the training run using a `ScriptRunConfig`. By providing the\n",
"command to run the training, and a `RunConfig` object configured w"
]
},
{
@@ -672,94 +661,51 @@
"metadata": {},
"outputs": [],
"source": [
"script_params = { \n",
" # Checkpoint number of the checkpoint from which to roll out\n",
" \"--checkpoint-number\": last_checkpoint_number,\n",
"ray_environment_name = 'cartpole-ray-sc'\n",
"\n",
" # Training algorithm\n",
" \"--run\": training_algorithm,\n",
"experiment_name = 'CartPole-v0-SC'\n",
"training_algorithm = 'PPO'\n",
"rl_environment = 'CartPole-v0'\n",
"\n",
" # Training environment\n",
" \"--env\": rl_environment,\n",
" \n",
" # Algorithm-specific parameters\n",
" \"--config\": '{}',\n",
" \n",
" # Number of rollout steps \n",
" \"--steps\": 2000,\n",
" \n",
" # If should repress rendering of the environment\n",
" \"--no-render\": \"\",\n",
" \n",
" # The place where recorded videos will be stored\n",
" \"--video-dir\": \"./logs/video\"\n",
"}\n",
"experiment = Experiment(workspace=ws, name=experiment_name)\n",
"ray_environment = Environment.get(workspace=ws, name=ray_environment_name)\n",
"\n",
"script_name = 'cartpole_rollout.py'\n",
"video_capture = True\n",
"if video_capture:\n",
" script_params.pop(\"--no-render\")\n",
" script_arguments = ['--video-dir', './logs/video']\n",
"else:\n",
" script_params.pop(\"--video-dir\")\n",
" script_arguments = ['--no-render']\n",
"script_arguments = script_arguments + [\n",
" '--run', training_algorithm,\n",
" '--env', rl_environment,\n",
" '--config', '{}',\n",
" '--steps', '2000',\n",
" '--checkpoint-number', str(last_checkpoint_number),\n",
" '--artifacts-dataset', checkpoint_ds.as_named_input('artifacts_dataset'),\n",
" '--artifacts-path', checkpoint_ds.as_named_input('artifacts_path').as_mount()\n",
"]\n",
"\n",
"command = [\"python\", script_name, *script_arguments]\n",
"\n",
"# Ray's video capture support requires to run everything under a headless display driver called (xvfb).\n",
"# There are two parts to this:\n",
"\n",
"# 1. Use a custom docker file with proper instructions to install xvfb, ffmpeg, python-opengl\n",
"# and other dependencies.\n",
"# Note: Even when the rendering is off pyhton-opengl is needed.\n",
"\n",
"with open(\"files/docker/Dockerfile\", \"r\") as f:\n",
" dockerfile=f.read()\n",
"\n",
"xvfb_env = Environment(name='xvfb-vdisplay')\n",
"xvfb_env.docker.base_image = None\n",
"xvfb_env.docker.base_dockerfile = dockerfile\n",
" \n",
"# 2. Execute the Python process via the xvfb-run command to set up the headless display driver.\n",
"xvfb_env.python.user_managed_dependencies = True\n",
"if video_capture:\n",
" xvfb_env.python.interpreter_path = \"xvfb-run -s '-screen 0 640x480x16 -ac +extension GLX +render' python\"\n",
" command = [\"xvfb-run -s '-screen 0 640x480x16 -ac +extension GLX +render' \"] + command\n",
" run_config.environment_variables[\"SDL_VIDEODRIVER\"] = \"dummy\"\n",
"\n",
"run_config = RunConfiguration(communicator='OpenMpi')\n",
"run_config.target = compute_target\n",
"run_config.docker = DockerConfiguration(use_docker=True)\n",
"run_config.node_count = 1\n",
"run_config.environment = ray_environment\n",
"\n",
"rollout_estimator = ReinforcementLearningEstimator(\n",
" # Location of source files\n",
" source_directory='files',\n",
"rollout_config = ScriptRunConfig(\n",
" source_directory='./files',\n",
" command=command,\n",
" run_config=run_config\n",
" )\n",
"\n",
" # Python script file\n",
" entry_script='cartpole_rollout.py',\n",
" \n",
" # A dictionary of arguments to pass to the rollout script specified in ``entry_script``\n",
" script_params = script_params,\n",
" \n",
" # Data inputs\n",
" inputs=[\n",
" checkpoint_ds.as_named_input('artifacts_dataset'),\n",
" checkpoint_ds.as_named_input('artifacts_path').as_mount()],\n",
" \n",
" # The Azure Machine Learning compute target set up for Ray head nodes\n",
" compute_target=compute_target,\n",
" \n",
" # Reinforcement learning framework. Currently must be Ray.\n",
" rl_framework=Ray(),\n",
" \n",
" # Custom environmnet for Xvfb\n",
" environment=xvfb_env)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Same as before, we use the *rollout_estimator* to submit a run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rollout_run = exp.submit(rollout_estimator)"
"rollout_run = experiment.submit(rollout_config)\n",
"rollout_run"
]
},
{
@@ -811,11 +757,6 @@
"metadata": {},
"outputs": [],
"source": [
"# Get a handle to child run\n",
"child_runs = list(rollout_run.get_children())\n",
"print('Number of child runs:', len(child_runs))\n",
"child_run_0 = child_runs[0]\n",
"\n",
"# Download rollout artifacts\n",
"rollout_artifacts_path = path.join(\"logs\", \"rollout\")\n",
"print(\"Rollout artifacts path:\", rollout_artifacts_path)\n",
@@ -824,7 +765,7 @@
" dir_util.remove_tree(rollout_artifacts_path)\n",
"\n",
"# Download videos to local compute\n",
"child_run_0.download_files(\"logs/video\", output_directory = rollout_artifacts_path)"
"rollout_run.download_files(\"logs/video\", output_directory = rollout_artifacts_path)"
]
},
{
@@ -914,6 +855,16 @@
"name": "dasommer"
}
],
"categories": [
"how-to-use-azureml",
"reinforcement-learning"
],
"interpreter": {
"hash": "13382f70c1d0595120591d2e358c8d446daf961bf951d1fba9a32631e205d5ab"
},
"kernel_info": {
"name": "python38-azureml"
},
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -929,10 +880,13 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
"version": "3.7.9"
},
"notice": "Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License."
"notice": "Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.",
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 0
}

View File

@@ -1,4 +1,3 @@
import argparse
import os
import sys
@@ -11,10 +10,7 @@ from azureml.core import Run
from utils import callbacks
def run_rollout(args, parser):
config = args.config
if not args.env:
@@ -22,8 +18,6 @@ def run_rollout(args, parser, ray_address):
parser.error("the following arguments are required: --env")
args.env = config.get("env")
# Create the Trainer from config.
cls = get_trainable_cls(args.run)
agent = cls(env=args.env, config=config)
@@ -76,6 +70,10 @@ def run_rollout(args, parser, ray_address):
if __name__ == "__main__":
# Start ray head (single node)
os.system('ray start --head')
ray.init(address='auto')
# Add positional argument - serves as placeholder for checkpoint
argvc = sys.argv[1:]
argvc.insert(0, 'checkpoint-placeholder')
@@ -88,8 +86,12 @@ if __name__ == "__main__":
help='Checkpoint number of the checkpoint from which to roll out')
rollout_parser.add_argument(
'--artifacts-dataset', required=True,
help='The checkpoints artifacts dataset')
rollout_parser.add_argument(
'--artifacts-path', required=True,
help='The checkpoints artifacts path')
args = rollout_parser.parse_args(argvc)
@@ -116,4 +118,4 @@ if __name__ == "__main__":
args.checkpoint = checkpoint
# Start rollout
run_rollout(args, rollout_parser)
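The pattern introduced above, `ray start --head` followed by `ray.init(address='auto')`, replaces the old `--ray-address` plumbing. A self-contained sketch of that single-node bootstrap, assuming Ray is installed locally:

```python
import os
import ray

# Launch a Ray head process on this node; it keeps running in the background.
os.system('ray start --head')

# address='auto' makes ray.init() discover and attach to the head process
# started above, instead of requiring an explicit host:port address.
ray.init(address='auto')

print(ray.cluster_resources())  # sanity check: CPUs/GPUs visible to Ray
```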

View File

@@ -1,17 +1,10 @@
import argparse
import os
import sys
import ray
from ray.rllib import train
from ray import tune
from utils import callbacks
if __name__ == "__main__":
# Parse arguments and add callbacks to config
@@ -24,11 +17,9 @@ if __name__ == "__main__":
if 'monitor' in args.config and args.config['monitor']:
print("Video capturing is ON!")
# Start ray head (single node)
os.system('ray start --head')
ray.init(address='auto')
# Run training task using tune.run
tune.run(

View File

@@ -8,7 +8,8 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
rm -rf /var/lib/apt/lists/* && \
rm -rf /usr/share/man/*
RUN conda install -y conda=4.12.0 python=3.7 && conda clean -ay
RUN pip install ray-on-aml==0.1.6 && \
pip install --no-cache-dir \
azureml-defaults \
azureml-dataset-runtime[fuse,pandas] \
@@ -22,10 +23,12 @@ RUN conda install -y conda=4.7.12 python=3.7 && conda clean -ay && \
tabulate \
dm_tree \
lz4 \
psutil \
setproctitle \
pygame \
gym[classic_control]==0.19.0 && \
conda install -y -c conda-forge x264='1!152.20180717' ffmpeg=4.0.2 && \
conda install -c anaconda opencv
RUN pip install --upgrade ray==0.8.3 \
ray[rllib,dashboard,tune]==0.8.3

View File

@@ -8,7 +8,7 @@ from azureml.core import Run
def on_train_result(info):
'''Callback on train result to record metrics returned by trainer.
'''
run = Run.get_context()
run.log(
name='episode_reward_mean',
value=info["result"]["episode_reward_mean"])

View File

@@ -1,36 +1,7 @@
FROM akdmsft/particle-cpu
# Install required pip packages
RUN pip3 install --upgrade pip setuptools && pip3 install --upgrade \
pandas \
matplotlib \
psutil \
@@ -43,18 +14,19 @@ RUN pip install --upgrade pip setuptools && pip install --upgrade \
tensorflow-probability==0.8.0 \
onnxruntime \
tf2onnx \
cloudpickle==1.1.1 \
tabulate \
dm_tree \
lz4 \
opencv-python
# Install particle
RUN git clone https://github.com/openai/multiagent-particle-envs.git
COPY patch_files/* multiagent-particle-envs/multiagent/
RUN cd multiagent-particle-envs && \
pip3 install -e . && \
pip3 install --upgrade pyglet==1.3.2
RUN pip3 install ray-on-aml==0.1.6
RUN pip3 install --upgrade \
ray==0.8.7 \
ray[rllib]==0.8.7 \
ray[tune]==0.8.7

View File

@@ -1,8 +1,7 @@
import argparse
import re
import os
from ray_on_aml.core import Ray_On_AML
from ray.tune import run_experiments
from ray.tune.registry import register_trainable, register_env, get_trainable_cls
import ray.rllib.contrib.maddpg.maddpg as maddpg
@@ -12,7 +11,8 @@ from util import parse_args
def setup_ray():
ray_on_aml = Ray_On_AML()
ray_on_aml.getRay()
register_env('particle', env_creator)
@@ -120,5 +120,4 @@ if __name__ == '__main__':
'horizon': args.max_episode_len,
'video_frequency': args.checkpoint_freq,
}
train(args, env_config)
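`Ray_On_AML` takes over the cluster bootstrap that `ray.init(address='auto')` previously did by hand. A sketch of the library's usual head/worker split, based on its documented usage (details may vary by version):

```python
from ray_on_aml.core import Ray_On_AML

ray_on_aml = Ray_On_AML()
ray = ray_on_aml.getRay()

if ray:
    # Only the head node receives a live Ray handle; run the driver logic here.
    print(ray.cluster_resources())
else:
    # Worker nodes receive None and simply stay alive as members of the cluster.
    print("in worker node: nothing to do")
```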

View File

@@ -60,8 +60,6 @@
" - [RL using Azure Machine Learning compute](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.ipynb)\n",
"- [Scaling RL training runs with Azure Machine Learning](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb)\n",
"\n",
"Advanced users might also be interested in [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.ipynb) demonstrating how to train a Minecraft RL agent in Azure Machine Learning.\n",
"\n",
"## Initialize resources\n",
"\n",
"All required Azure Machine Learning service resources for this tutorial can be set up from Jupyter. This includes:\n",
@@ -79,7 +77,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646249589452
}
},
"outputs": [],
"source": [
"import azureml.core\n",
@@ -98,7 +100,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646250284486
}
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
@@ -122,7 +128,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646250342411
}
},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
@@ -159,7 +169,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646250346756
}
},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
@@ -208,21 +222,33 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646257481631
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"import os\n",
"from azureml.core import Environment\n",
"import os\n",
"\n",
"cpu_particle_env = Environment(name='particle-cpu')\n",
"ray_environment_name = 'particle-cpu'\n",
"ray_environment_dockerfile_path = os.path.join(os.getcwd(), 'docker', 'cpu', 'Dockerfile')\n",
"ray_environment = Environment. \\\n",
" from_dockerfile(name=ray_environment_name, dockerfile=ray_environment_dockerfile_path). \\\n",
" register(workspace=ws)\n",
"ray_cpu_build_details = ray_environment.build(workspace=ws)\n",
"\n",
"cpu_particle_env.docker.enabled = True\n",
"cpu_particle_env.docker.base_image = 'akdmsft/particle-cpu'\n",
"cpu_particle_env.python.interpreter_path = 'xvfb-run -s \"-screen 0 640x480x16 -ac +extension GLX +render\" python'\n",
"\n",
"max_train_time = os.environ.get('AML_MAX_TRAIN_TIME_SECONDS', 2 * 60 * 60)\n",
"cpu_particle_env.environment_variables['AML_MAX_TRAIN_TIME_SECONDS'] = str(max_train_time)\n",
"cpu_particle_env.python.user_managed_dependencies = True"
"ray_cpu_build_details.wait_for_completion(show_output=True)"
]
},
{
@@ -237,30 +263,15 @@
"can find more information in the [common parameters](https://docs.ray.io/en/latest/rllib-training.html?highlight=multiagent#common-parameters)\n",
"documentation.\n",
"\n",
"For monitoring and understanding the training progress, one\n",
"of the training environments is wrapped in a [Gym monitor](https://github.com/openai/gym/blob/master/gym/wrappers/monitor.py)\n",
"which periodically captures videos - by default every 200 training\n",
"iterations.\n",
"\n",
"The stopping criteria are set such that the training run is\n",
"terminated after either a mean reward of -400 is observed, or\n",
"terminated after either a mean reward of -450 is observed, or\n",
"training has run for over 2 hours.\n",
"\n",
"### Submitting a training run\n",
"\n",
"Below, you create the training run using a `ReinforcementLearningEstimator`\n",
"object, which contains all the configuration parameters for this experiment:\n",
"\n",
"- `source_directory`: Contains the training script and helper files to be\n",
" copied onto the node.\n",
"- `entry_script`: The training script, described in more detail above.\n",
"- `script_params`: The command line arguments to pass to the entry script.\n",
"- `compute_target`: The compute target for training script execution.\n",
"- `environment`: The Azure Machine Learning environment definition for the node running the training.\n",
"- `max_run_duration_seconds`: The time after which to abort the run if it is still running.\n",
"\n",
"For more details, please take a look at the [online documentation](https://docs.microsoft.com/en-us/python/api/azureml-contrib-reinforcementlearning/?view=azure-ml-py)\n",
"for Azure Machine Learning service's reinforcement learning offering.\n",
"You can submit the training run using a `ScriptRunConfig`. By providing the\n",
"command to run the training, and a `RunConfig` object configured with your\n",
"compute target, number of nodes, and environment image to use.\n",
"\n",
"Note that you can use the same notebook and scripts to experiment with\n",
"different Particle environments. You can find a list of supported\n",
@@ -270,44 +281,71 @@
"In order to get the best training results, you can also adjust the\n",
"`--final-reward` parameter to determine when to stop training. A greater\n",
"reward means longer running time, but improved results. By default,\n",
"the final reward will be -400, which should show good progress after\n",
"the final reward will be -450, which should show good progress after\n",
"about one hour of run time.\n",
"\n",
"For this notebook, we use a single D3 nodes, giving us a total of 4 CPUs and\n",
"0 GPUs. One CPU is used by the MADDPG trainer, and an additional CPU is\n",
"consumed by the RLlib rollout worker. The other 2 CPUs are not used, though\n",
"smaller node types will run out of memory for this task.\n",
"\n",
"Lastly, the RunDetails widget displays information about the submitted RL\n",
"experiment, including a link to the Azure portal with more details."
"smaller node types will run out of memory for this task."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"gather": {
"logged": 1646275371701
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.contrib.train.rl import ReinforcementLearningEstimator\n",
"from azureml.core import RunConfiguration, ScriptRunConfig, Experiment\n",
"from azureml.core.runconfig import DockerConfiguration, RunConfiguration\n",
"from azureml.widgets import RunDetails\n",
"\n",
"estimator = ReinforcementLearningEstimator(\n",
" source_directory='files',\n",
" entry_script='particle_train.py',\n",
" script_params={\n",
" '--scenario': 'simple_spread',\n",
" '--final-reward': -400\n",
" },\n",
" compute_target=cpu_cluster,\n",
" environment=cpu_particle_env,\n",
" max_run_duration_seconds=3 * 60 * 60\n",
")\n",
"experiment_name = 'particle-multiagent'\n",
"\n",
"train_run = exp.submit(config=estimator)\n",
"experiment = Experiment(workspace=ws, name=experiment_name)\n",
"\n",
"aml_run_config_ml = RunConfiguration(communicator='OpenMpi')\n",
"aml_run_config_ml.target = cpu_cluster\n",
"aml_run_config_ml.docker = DockerConfiguration(use_docker=True)\n",
"aml_run_config_ml.node_count = 1\n",
"aml_run_config_ml.environment = ray_environment\n",
"\n",
"config = ScriptRunConfig(source_directory='./files',\n",
" command=[\n",
" 'xvfb-run -s \"-screen 0 640x480x16 -ac +extension GLX +render\" python',\n",
" 'particle_train.py',\n",
" '--scenario', 'simple_spread',\n",
" '--final-reward', '-450'\n",
" ],\n",
" run_config = aml_run_config_ml\n",
" )\n",
"train_run = experiment.submit(config)\n",
"\n",
"RunDetails(train_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Job cancellation\n",
"\n",
"You may cancel the job by uncommenting and running the cell below."
]
},
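A typical cancellation cell is a one-liner against the run handle, left commented out so it is not triggered accidentally; a hedged sketch:

```python
# Uncomment to stop the training job before it reaches its stopping criteria.
# train_run.cancel()
```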
{
"cell_type": "code",
"execution_count": null,
@@ -349,22 +387,10 @@
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"from azureml.tensorboard import Tensorboard\n",
"# from azureml.tensorboard import Tensorboard\n",
"\n",
"head_run = None\n",
"\n",
"timeout = 60\n",
"while timeout > 0 and head_run is None:\n",
" timeout -= 1\n",
" \n",
" try:\n",
" head_run = next(r for r in train_run.get_children() if r.id.endswith('head'))\n",
" except StopIteration:\n",
" time.sleep(1)\n",
"\n",
"tb = Tensorboard([head_run])\n",
"tb.start()"
"# tb = Tensorboard([train_run])\n",
"# tb.start()"
]
},
{
@@ -391,18 +417,18 @@
"metadata": {},
"outputs": [],
"source": [
"import tempfile\n",
"import os\n",
"from azureml.core import Dataset\n",
"from azureml.data.dataset_error_handling import DatasetValidationError\n",
"\n",
"from IPython.display import clear_output\n",
"from IPython.core.display import display, Video\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"datastore = ws.datastores['workspaceartifactstore']\n",
"path_prefix = './tmp_videos'\n",
"\n",
"def download_latest_training_video(run, video_checkpoint_counter):\n",
" run_artifacts_path = os.path.join('azureml', run.id)\n",
" run_artifacts_path = os.path.join('ExperimentRun', f'dcid.{run.id}', 'logs', 'videos')\n",
" \n",
" try:\n",
" run_artifacts_ds = Dataset.File.from_files(datastore.path(os.path.join(run_artifacts_path, '**')))\n",
@@ -429,7 +455,7 @@
"\n",
"def render_video(vf):\n",
" clear_output(wait=True)\n",
" display(Video(data=vf, embed=True, html_attributes='loop autoplay width=50%'))"
" display(Video(data=vf, embed=True, html_attributes='loop autoplay controls width=50%'))"
]
},
{
@@ -438,13 +464,13 @@
"metadata": {},
"outputs": [],
"source": [
"import shutil\n",
"import shutil, time\n",
"\n",
"terminal_statuses = ['Canceled', 'Completed', 'Failed']\n",
"video_checkpoint_counter = 0\n",
"\n",
"while head_run.get_status() not in terminal_statuses:\n",
" video_file, video_checkpoint_counter = download_latest_training_video(head_run, video_checkpoint_counter)\n",
"while train_run.get_status() not in terminal_statuses:\n",
" video_file, video_checkpoint_counter = download_latest_training_video(train_run, video_checkpoint_counter)\n",
" if video_file is not None:\n",
" render_video(video_file)\n",
" \n",
@@ -504,6 +530,16 @@
"name": "andress"
}
],
"categories": [
"how-to-use-azureml",
"reinforcement-learning"
],
"interpreter": {
"hash": "13382f70c1d0595120591d2e358c8d446daf961bf951d1fba9a32631e205d5ab"
},
"kernel_info": {
"name": "python38-azureml"
},
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -519,10 +555,13 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
"version": "3.7.9"
},
"notice": "Copyright (c) Microsoft Corporation. All rights reserved.\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00afLicensed under the MIT License.\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00af "
"notice": "Copyright (c) Microsoft Corporation. All rights reserved.\u00c3\u0192\u00c2\u00a2\u00c3\u00a2\u00e2\u20ac\u0161\u00c2\u00ac\u00c3\u201a\u00c2\u00afLicensed under the MIT License.\u00c3\u0192\u00c2\u00a2\u00c3\u00a2\u00e2\u20ac\u0161\u00c2\u00ac\u00c3\u201a\u00c2\u00af ",
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 0
}

View File

@@ -95,7 +95,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.41.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -183,8 +183,8 @@
"train_data, test_data = dataset.random_split(percentage=0.8, seed=223)\n",
"\n",
"# Drop ModelName\n",
"train_data = train_data.drop_columns(['ModelName'])\n",
"test_data = test_data.drop_columns(['ModelName'])\n",
"train_data = train_data.drop_columns(['ModelName', 'VendorName'])\n",
"test_data = test_data.drop_columns(['ModelName', 'VendorName'])\n",
"\n",
"# Register the train dataset with your workspace\n",
"train_data.register(workspace = ws, name = 'rai_machine_train_dataset',\n",
@@ -254,7 +254,6 @@
"featurization_config.blocked_transformers = ['LabelEncoder']\n",
"#featurization_config.drop_columns = ['MMIN']\n",
"featurization_config.add_column_purpose('MYCT', 'Numeric')\n",
"featurization_config.add_column_purpose('VendorName', 'CategoricalHash')\n",
"#default strategy mean, add transformer param for for 3 columns\n",
"featurization_config.add_transformer_params('Imputer', ['CACH'], {\"strategy\": \"median\"})\n",
"featurization_config.add_transformer_params('Imputer', ['CHMIN'], {\"strategy\": \"median\"})\n",
@@ -466,8 +465,6 @@
"metadata": {},
"outputs": [],
"source": [
"categorical_features = ['VendorName']\n",
"\n",
"model_analysis_config = ModelAnalysisConfig(\n",
" title=\"Model analysis\",\n",
" model=model,\n",
@@ -479,7 +476,6 @@
" target_column_name=label,\n",
" confidential_datastore_name=ws.get_default_datastore().name,\n",
" run_configuration=conda_run_config,\n",
" categorical_column_names=categorical_features\n",
")"
]
},

View File

@@ -8,5 +8,8 @@ dependencies:
- matplotlib
- azureml-dataset-runtime
- ipywidgets
- raiwidgets~=0.17.0
- liac-arff
- packaging>=20.9
- itsdangerous==2.0.1
- markupsafe<2.1.0

View File

@@ -30,7 +30,7 @@ _categorical_columns = [
def fetch_census_dataset():
"""Fetch the Adult Census Dataset.
"""Fetch the Adult Census Dataset
This uses a particular URL for the Adult Census dataset. The code
is a simplified version of fetch_openml() in sklearn.
@@ -39,45 +39,25 @@ def fetch_census_dataset():
https://openml.org/data/v1/download/1595261.gz
(as of 2021-03-31)
"""
dataset_path = "1595261.gz"
try:
from urllib import urlretrieve
except ImportError:
from urllib.request import urlretrieve
file_stream = gzip.GzipFile(filename=dataset_path, mode='rb')
filename = "1595261.gz"
data_url = "https://rainotebookscdn.blob.core.windows.net/datasets/"
remaining_attempts = 5
sleep_duration = 10
while remaining_attempts > 0:
try:
urlretrieve(data_url + filename, filename)
http_stream = gzip.GzipFile(filename=filename, mode='rb')
with closing(http_stream):
with closing(file_stream):
def _stream_generator(response):
for line in response:
yield line.decode('utf-8')
stream = _stream_generator(http_stream)
stream = _stream_generator(file_stream)
data = arff.load(stream)
except Exception as exc: # noqa: B902
remaining_attempts -= 1
print("Error downloading dataset from {} ({} attempt(s) remaining)"
.format(data_url, remaining_attempts))
print(exc)
sleep(sleep_duration)
sleep_duration *= 2
continue
else:
# dataset successfully downloaded
break
else:
raise Exception("Could not retrieve dataset from {}.".format(data_url))
except Exception as exc:
raise Exception("Could not load dataset from {} with exception {}".format(dataset_path, exc))
attributes = OrderedDict(data['attributes'])
arff_columns = list(attributes)
raw_df = pd.DataFrame(data=data['data'], columns=arff_columns)
target_column_name = 'class'

View File

@@ -100,7 +100,7 @@
"\n",
"# Check core SDK version number\n",
"\n",
"print(\"This notebook was created using SDK version 1.37.0, you are currently running version\", azureml.core.VERSION)"
"print(\"This notebook was created using SDK version 1.41.0, you are currently running version\", azureml.core.VERSION)"
]
},
{
@@ -363,6 +363,43 @@
"run.log_image(name='Hyperbolic Tangent', plot=plt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Logging for when more Metric Names are required\n",
"\n",
"Limits on logging are internally enforced to ensure a smooth experience, however these can sometimes be limiting, particularly in terms of the limit on metric names.\n",
"\n",
"The \"Logging Vectors\" or \"Logging Tables\" examples previously can be expanded upon to use up to 15 columns to increase this limit, with the information still being presented in Run Details as a chart, and being directly comparable in experiment reports.\n",
"\n",
"**Note:** see [Azure Machine Learning Limits Documentation](https://aka.ms/azure-machine-learning-limits) for more information on service limits.\n",
"**Note:** tables logged into the run are expected to be relatively small. Logging very large tables into Azure ML can result in reduced performance. If you need to store large amounts of data associated with the run, you can write the data to file that will be uploaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"metricNames = [ \"Accuracy\", \"Precision\", \"Recall\" ]\n",
"columnNames = [ \"expected\", \"actual\", \"calculated\", \"inferred\", \"determined\", \"predicted\", \"forecast\", \"speculated\", \"assumed\", \"required\", \"intended\", \"deduced\", \"theorized\", \"hoped\", \"hypothesized\" ]\n",
"\n",
"for step in range(1000):\n",
" for metricName in metricNames:\n",
"\n",
" metricKeyValueDictionary={}\n",
" for column in columnNames:\n",
" metricKeyValueDictionary[column] = random.randrange(0, step + 1)\n",
"\n",
" run.log_row(\n",
" metricName,\n",
" \"Example row for metric \" + metricName,\n",
" **metricKeyValueDictionary)"
]
},
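As the note above suggests, bulky data is better attached to the run as a file than logged as metrics. A minimal sketch using the standard `Run.upload_file` API (the file name and contents here are illustrative):

```python
import json

# Persist the raw per-step values to disk instead of logging thousands of metrics.
with open('step_details.json', 'w') as f:
    json.dump({'metric': 'Accuracy', 'values': [0.91, 0.92, 0.93]}, f)

# Attach the file to the run; anything under 'outputs/' also appears
# automatically in the run's "Outputs + logs" tab.
run.upload_file(name='outputs/step_details.json',
                path_or_stream='step_details.json')
```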
{
"cell_type": "markdown",
"metadata": {},
@@ -498,7 +535,6 @@
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.makedirs('files', exist_ok=True)\n",
"\n",
"for f in run.get_file_names():\n",

View File

@@ -235,7 +235,7 @@
"(myenv) $ az vm create --resource-group <resource_group_name> --name <some_vm_name> --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username <username> --admin-password <password> --generate-ssh-keys --authentication-type password\n",
"```\n",
"\n",
"**Note**: You can also use [this url](https://portal.azure.com/#create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal\n",
"**Note**: You can also use [this url](https://ms.portal.azure.com/#create/microsoft-dsvm.ubuntu-18041804) to create the VM using the Azure Portal\n",
"\n",
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you switch to a different port (such as 5022), you can specify the port number in the provisioning configuration object."
]
@@ -277,6 +277,9 @@
" ssh_port=22,\n",
" username='username',\n",
" private_key_file='./.ssh/id_rsa')\n",
" \n",
" attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, attach_config)\n",
" \n",
" attached_dsvm_compute.wait_for_completion(show_output=True)"
]
},

View File

@@ -184,24 +184,6 @@
"myenv.python.conda_dependencies=conda_dep"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -108,8 +108,8 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [auto-ml-continuous-retraining](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb) | | | | | | |
| [auto-ml-regression-model-proxy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/experimental/regression-model-proxy/auto-ml-regression-model-proxy.ipynb) | | | | | | |
| [auto-ml-forecasting-backtest-many-models](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb) | | | | | | |
| [auto-ml-forecasting-energy-demand](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) | | | | | | |
| [auto-ml-forecasting-github-dau](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb) | | | | | | |
| [auto-ml-forecasting-hierarchical-timeseries](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb) | | | | | | |
| [auto-ml-forecasting-many-models](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) | | | | | | |
| [auto-ml-forecasting-univariate-recipe-experiment-settings](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) | | | | | | |

View File

@@ -102,7 +102,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.37.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.41.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -19,7 +19,7 @@
"source": [
"# Quickstart: Train and deploy a model in Azure Machine Learning in 10 minutes\n",
"\n",
"In this quickstart, learn how to get started with Azure Machine Learning. You'll train an image classification model using the [MNIST](https://azure.microsoft.com/services/open-datasets/catalog/mnist/) dataset.\n",
"In this quickstart, learn how to get started with Azure Machine Learning. You'll train an image classification model using the [MNIST](https://docs.microsoft.com/azure/open-datasets/dataset-mnist) dataset.\n",
"\n",
"You'll learn how to:\n",
"\n",
@@ -280,7 +280,7 @@
"# get a curated environment\n",
"env = Environment.get(\n",
" workspace=ws, \n",
" name=\"AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference\",\n",
" name=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu\",\n",
" version=1\n",
")\n",
"env.inferencing_stack_version='latest'\n",

View File

@@ -21,7 +21,7 @@
"\n",
"In this quickstart, you learn how to submit a batch training job using the Python SDK. In this example, we submit the job to the 'local' machine (the compute instance you are running this notebook on). However, you can use exactly the same method to submit the job to different compute targets (for example, AKS, Azure Machine Learning Compute Cluster, Synapse, etc) by changing a single line of code. A full list of support compute targets can be viewed [here](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target). \n",
"\n",
"This quickstart trains a simple logistic regression using the [MNIST](https://azure.microsoft.com/services/open-datasets/catalog/mnist/) dataset and [scikit-learn](http://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing a number from 0 to 9. The goal is to create a multi-class classifier to identify the digit a given image represents. \n",
"This quickstart trains a simple logistic regression using the [MNIST](https://docs.microsoft.com/azure/open-datasets/dataset-mnist) dataset and [scikit-learn](http://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing a number from 0 to 9. The goal is to create a multi-class classifier to identify the digit a given image represents. \n",
"\n",
"You will learn how to:\n",
"\n",

View File

@@ -2,7 +2,7 @@ import argparse
import os
import numpy as np
import glob
# import joblib
import mlflow
from sklearn.linear_model import LogisticRegression
@@ -30,8 +30,7 @@ X_train = (
os.path.join(data_folder, "**/train-images-idx3-ubyte.gz"), recursive=True
)[0],
False,
) / 255.0
)
X_test = (
load_data(
@@ -39,8 +38,7 @@ X_test = (
os.path.join(data_folder, "**/t10k-images-idx3-ubyte.gz"), recursive=True
)[0],
False,
) /
255.0
) / 255.0
)
y_train = load_data(
glob.glob(

View File

@@ -17,7 +17,7 @@
"\n",
"In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning service (preview) in a Python Jupyter notebook. You can then use the notebook as a template to train your own machine learning model with your own data. This tutorial is **part one of a two-part tutorial series**. \n",
"\n",
"This tutorial trains a simple logistic regression using the [MNIST](https://azure.microsoft.com/services/open-datasets/catalog/mnist/) dataset and [scikit-learn](http://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing a number from 0 to 9. The goal is to create a multi-class classifier to identify the digit a given image represents. \n",
"This tutorial trains a simple logistic regression using the [MNIST](https://docs.microsoft.com/azure/open-datasets/dataset-mnist) dataset and [scikit-learn](http://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing a number from 0 to 9. The goal is to create a multi-class classifier to identify the digit a given image represents. \n",
"\n",
"Learn how to:\n",
"\n",