{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML 18: Energy Demand Forecasting\n",
"\n",
"In this example, we show how AutoML can be used for energy demand forecasting.\n",
"\n",
"Make sure you have executed the [00.configuration](00.configuration.ipynb) before running this notebook.\n",
"\n",
"In this notebook you would see\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Instantiating AutoMLConfig with new task type \"forecasting\" for timeseries data training, and other timeseries related settings: for this dataset we use the basic one: \"time_column_name\" \n",
"3. Training the Model using local compute\n",
"4. Exploring the results\n",
"5. Testing the fitted model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Experiment\n",
"\n",
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import pandas as pd\n",
"import numpy as np\n",
"import os\n",
"import logging\n",
"import warnings\n",
"warnings.showwarning = lambda *args, **kwargs: None\n",
"\n",
"\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl.run import AutoMLRun\n",
"from matplotlib import pyplot as plt\n",
"from matplotlib.pyplot import imshow\n",
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for the run history container in the workspace\n",
"experiment_name = 'automl-energydemandforecasting'\n",
"# project folder\n",
"project_folder = './sample_projects/automl-local-energydemandforecasting'\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['SDK version'] = azureml.core.VERSION\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Project Directory'] = project_folder\n",
"output['Run History Name'] = experiment_name\n",
"pd.set_option('display.max_colwidth', -1)\n",
"pd.DataFrame(data=output, index=['']).T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Read Data\n",
"Read energy demanding data from file, and preview data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = pd.read_csv(\"nyc_energy.csv\", parse_dates=['timeStamp'])\n",
"data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split the data to train and test\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train = data[data['timeStamp'] < '2017-02-01']\n",
"test = data[data['timeStamp'] >= '2017-02-01']\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare the test data, we will feed X_test to the fitted model and get prediction"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_test = test.pop('demand').values\n",
"X_test = test"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split the train data to train and valid\n",
"\n",
"Use one month's data as valid data\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train = train[train['timeStamp'] < '2017-01-01']\n",
"X_valid = train[train['timeStamp'] >= '2017-01-01']\n",
"y_train = X_train.pop('demand').values\n",
"y_valid = X_valid.pop('demand').values\n",
"print(X_train.shape)\n",
"print(y_train.shape)\n",
"print(X_valid.shape)\n",
"print(y_valid.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiate Auto ML Config\n",
"\n",
"Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
"|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|\n",
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
"|**X**|(sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**X_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, n_features]|\n",
"|**y_valid**|Data used to evaluate a model in a iteration. (sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers. |\n",
"|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"time_column_name = 'timeStamp'\n",
"automl_settings = {\n",
" \"time_column_name\": time_column_name,\n",
"}\n",
"\n",
"\n",
"automl_config = AutoMLConfig(task = 'forecasting',\n",
" debug_log = 'automl_nyc_energy_errors.log',\n",
" primary_metric='normalized_root_mean_squared_error',\n",
" iterations = 10,\n",
" iteration_timeout_minutes = 5,\n",
" X = X_train,\n",
" y = y_train,\n",
" X_valid = X_valid,\n",
" y_valid = y_valid,\n",
" path=project_folder,\n",
" verbosity = logging.INFO,\n",
" **automl_settings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the Model\n",
"\n",
"You can call the submit method on the experiment object and pass the run configuration. For Local runs the execution is synchronous. Depending on the data and number of iterations this can run for while.\n",
"You will see the currently running iterations printing to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
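{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can inspect the run with a Jupyter widget. This is a minimal sketch; it assumes the azureml-widgets package is installed in your environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: display a widget that tracks the iterations of the run.\n",
"# Assumes the azureml-widgets package is installed.\n",
"from azureml.widgets import RunDetails\n",
"RunDetails(local_run).show()"
]
},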
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Below we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"fitted_model.steps"
]
},
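{
"cell_type": "markdown",
"metadata": {},
"source": [
"As noted above, get_output also accepts arguments to retrieve the run and fitted model for a specific logged metric or a particular iteration. A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Best run and model according to a specific logged metric.\n",
"metric_run, metric_model = local_run.get_output(metric='r2_score')\n",
"\n",
"# Run and fitted model from a particular iteration, e.g. the first one.\n",
"first_run, first_model = local_run.get_output(iteration=0)"
]
},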
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the Best Fitted Model\n",
"\n",
"Predict on training and test set, and calculate residual values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_pred = fitted_model.predict(X_test)\n",
"y_pred"
]
},
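{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a quick sanity check, you can line the predictions up against the actuals and their timestamps. This is a minimal sketch using the X_test frame defined above; note that y_test may still contain NaN values at this point."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preview predictions next to actual demand, keyed by timestamp.\n",
"pd.DataFrame({'timeStamp': X_test[time_column_name].values,\n",
"              'actual': y_test,\n",
"              'predicted': y_pred}).head()"
]
},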
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define a Check Data Function\n",
"\n",
"Remove the nan values from y_test to avoid error when calculate metrics "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def _check_calc_input(y_true, y_pred, rm_na=True):\n",
" \"\"\"\n",
" Check that 'y_true' and 'y_pred' are non-empty and\n",
" have equal length.\n",
"\n",
" :param y_true: Vector of actual values\n",
" :type y_true: array-like\n",
"\n",
" :param y_pred: Vector of predicted values\n",
" :type y_pred: array-like\n",
"\n",
" :param rm_na:\n",
" If rm_na=True, remove entries where y_true=NA and y_pred=NA.\n",
" :type rm_na: boolean\n",
"\n",
" :return:\n",
" Tuple (y_true, y_pred). if rm_na=True,\n",
" the returned vectors may differ from their input values.\n",
" :rtype: Tuple with 2 entries\n",
" \"\"\"\n",
" if len(y_true) != len(y_pred):\n",
" raise ValueError(\n",
" 'the true values and prediction values do not have equal length.')\n",
" elif len(y_true) == 0:\n",
" raise ValueError(\n",
" 'y_true and y_pred are empty.')\n",
" # if there is any non-numeric element in the y_true or y_pred,\n",
" # the ValueError exception will be thrown.\n",
" y_true = np.array(y_true).astype(float)\n",
" y_pred = np.array(y_pred).astype(float)\n",
" if rm_na:\n",
" # remove entries both in y_true and y_pred where at least\n",
" # one element in y_true or y_pred is missing\n",
" y_true_rm_na = y_true[~(np.isnan(y_true) | np.isnan(y_pred))]\n",
" y_pred_rm_na = y_pred[~(np.isnan(y_true) | np.isnan(y_pred))]\n",
" return (y_true_rm_na, y_pred_rm_na)\n",
" else:\n",
" return y_true, y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use the Check Data Function to remove the nan values from y_test to avoid error when calculate metrics "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_test,y_pred = _check_calc_input(y_test,y_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calculate metrics for the prediction\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"[Test Data] \\nRoot Mean squared error: %.2f\" % np.sqrt(mean_squared_error(y_test, y_pred)))\n",
"# Explained variance score: 1 is perfect prediction\n",
"print('mean_absolute_error score: %.2f' % mean_absolute_error(y_test, y_pred))\n",
"print('R2 score: %.2f' % r2_score(y_test, y_pred))\n",
"\n",
"\n",
"\n",
"# Plot outputs\n",
"%matplotlib notebook\n",
"test_pred = plt.scatter(y_test, y_pred, color='b')\n",
"test_test = plt.scatter(y_test, y_test, color='g')\n",
"plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n",
"plt.show()"
]
}
],
"metadata": {
"authors": [
{
"name": "xiaga"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}