Mirror of https://github.com/Azure/MachineLearningNotebooks.git (synced 2025-12-20 09:37:04 -05:00)

Compare commits: 11 commits (azureml-sd...azureml-sd)
| SHA1 |
|---|
| b9ef23ad4b |
| 7e2c1ca152 |
| d096535e48 |
| f80512a6db |
| b54111620e |
| 8dd52ee2df |
| 6c629f1eda |
| 053efde8c9 |
| 5189691f06 |
| fb900916e3 |
| 738347f3da |
@@ -103,7 +103,7 @@
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||
]
|
||||
},
|
||||
|
||||
contrib/fairness/fairlearn-azureml-mitigation.ipynb (Normal file, 564 lines)
@@ -0,0 +1,564 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Unfairness Mitigation with Fairlearn and Azure Machine Learning\n",
|
||||
"**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio**\n",
|
||||
"\n",
|
||||
"## Table of Contents\n",
|
||||
"\n",
|
||||
"1. [Introduction](#Introduction)\n",
|
||||
"1. [Loading the Data](#LoadingData)\n",
|
||||
"1. [Training an Unmitigated Model](#UnmitigatedModel)\n",
|
||||
"1. [Mitigation with GridSearch](#Mitigation)\n",
|
||||
"1. [Uploading a Fairness Dashboard to Azure](#AzureUpload)\n",
|
||||
" 1. Registering models\n",
|
||||
" 1. Computing Fairness Metrics\n",
|
||||
" 1. Uploading to Azure\n",
|
||||
"1. [Conclusion](#Conclusion)\n",
|
||||
"\n",
|
||||
"<a id=\"Introduction\"></a>\n",
|
||||
"## Introduction\n",
|
||||
"This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).\n",
|
||||
"\n",
|
||||
"We will apply the [grid search algorithm](https://fairlearn.github.io/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n",
|
||||
"\n",
|
||||
"### Setup\n",
|
||||
"\n",
|
||||
"To use this notebook, an Azure Machine Learning workspace is required.\n",
|
||||
"Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.\n",
|
||||
"This notebook also requires the following packages:\n",
|
||||
"* `azureml-contrib-fairness`\n",
|
||||
"* `fairlearn==0.4.6`\n",
|
||||
"* `joblib`\n",
|
||||
"* `shap`\n",
|
||||
"\n",
|
||||
"Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# !pip install --upgrade scikit-learn>=0.22.1"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"LoadingData\"></a>\n",
|
||||
"## Loading the Data\n",
|
||||
"We use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate\n",
|
||||
"from fairlearn.widget import FairlearnDashboard\n",
|
||||
"from sklearn import svm\n",
|
||||
"from sklearn.preprocessing import LabelEncoder, StandardScaler\n",
|
||||
"from sklearn.linear_model import LogisticRegression\n",
|
||||
"import pandas as pd\n",
|
||||
"import shap"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can now load and inspect the data from the `shap` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"X_raw, Y = shap.datasets.adult()\n",
|
||||
"X_raw[\"Race\"].value_counts().to_dict()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going separate this attribute out and drop it from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). We also separate out the Race column, but we will not perform any mitigation based on it. Finally, we perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"A = X_raw[['Sex','Race']]\n",
|
||||
"X = X_raw.drop(labels=['Sex', 'Race'],axis = 1)\n",
|
||||
"X = pd.get_dummies(X)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"le = LabelEncoder()\n",
|
||||
"Y = le.fit_transform(Y)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"With our data prepared, we can make the conventional split in to 'test' and 'train' subsets:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from sklearn.model_selection import train_test_split\n",
|
||||
"X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_raw, \n",
|
||||
" Y, \n",
|
||||
" A,\n",
|
||||
" test_size = 0.2,\n",
|
||||
" random_state=0,\n",
|
||||
" stratify=Y)\n",
|
||||
"\n",
|
||||
"# Work around indexing issue\n",
|
||||
"X_train = X_train.reset_index(drop=True)\n",
|
||||
"A_train = A_train.reset_index(drop=True)\n",
|
||||
"X_test = X_test.reset_index(drop=True)\n",
|
||||
"A_test = A_test.reset_index(drop=True)\n",
|
||||
"\n",
|
||||
"# Improve labels\n",
|
||||
"A_test.Sex.loc[(A_test['Sex'] == 0)] = 'female'\n",
|
||||
"A_test.Sex.loc[(A_test['Sex'] == 1)] = 'male'\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 0)] = 'Amer-Indian-Eskimo'\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 1)] = 'Asian-Pac-Islander'\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 2)] = 'Black'\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 3)] = 'Other'\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 4)] = 'White'"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"UnmitigatedModel\"></a>\n",
|
||||
"## Training an Unmitigated Model\n",
|
||||
"\n",
|
||||
"So we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)\n",
|
||||
"\n",
|
||||
"unmitigated_predictor.fit(X_train, Y_train)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can view this model in the fairness dashboard, and see the disparities which appear:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],\n",
|
||||
" y_true=Y_test,\n",
|
||||
" y_pred={\"unmitigated\": unmitigated_predictor.predict(X_test)})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunitiy - males are offered loans at three times the rate of females.\n",
|
||||
"\n",
|
||||
"Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact."
|
||||
]
|
||||
},
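To make the disparities described above concrete outside the dashboard, a quick per-group check can be run by hand. The sketch below is not part of the original notebook; it assumes the `unmitigated_predictor`, `X_test`, `Y_test` and `A_test` variables defined earlier and uses only pandas and scikit-learn.

```python
# Illustrative sketch (not in the original notebook): per-group selection rate
# and accuracy for the unmitigated predictor, grouped by the 'Sex' column of A_test.
import pandas as pd
from sklearn.metrics import accuracy_score

preds = unmitigated_predictor.predict(X_test)
check = pd.DataFrame({"sex": A_test.Sex, "y_true": Y_test, "y_pred": preds})
for sex, grp in check.groupby("sex"):
    print(sex,
          "selection rate:", round(grp["y_pred"].mean(), 3),
          "accuracy:", round(accuracy_score(grp["y_true"], grp["y_pred"]), 3))
```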
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"Mitigation\"></a>\n",
|
||||
"## Mitigation with GridSearch\n",
|
||||
"\n",
|
||||
"The `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.\n",
|
||||
"\n",
|
||||
"For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),\n",
|
||||
" constraints=DemographicParity(),\n",
|
||||
" grid_size=71)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.\n",
|
||||
"\n",
|
||||
"The following cell trains a many copies of the underlying estimator, and may take a minute or two to run:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sweep.fit(X_train, Y_train,\n",
|
||||
" sensitive_features=A_train.Sex)\n",
|
||||
"\n",
|
||||
"predictors = sweep._predictors"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"errors, disparities = [], []\n",
|
||||
"for m in predictors:\n",
|
||||
" classifier = lambda X: m.predict(X)\n",
|
||||
" \n",
|
||||
" error = ErrorRate()\n",
|
||||
" error.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.Sex)\n",
|
||||
" disparity = DemographicParity()\n",
|
||||
" disparity.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.Sex)\n",
|
||||
" \n",
|
||||
" errors.append(error.gamma(classifier)[0])\n",
|
||||
" disparities.append(disparity.gamma(classifier).max())\n",
|
||||
" \n",
|
||||
"all_results = pd.DataFrame( {\"predictor\": predictors, \"error\": errors, \"disparity\": disparities})\n",
|
||||
"\n",
|
||||
"dominant_models_dict = dict()\n",
|
||||
"base_name_format = \"census_gs_model_{0}\"\n",
|
||||
"row_id = 0\n",
|
||||
"for row in all_results.itertuples():\n",
|
||||
" model_name = base_name_format.format(row_id)\n",
|
||||
" errors_for_lower_or_eq_disparity = all_results[\"error\"][all_results[\"disparity\"]<=row.disparity]\n",
|
||||
" if row.error <= errors_for_lower_or_eq_disparity.min():\n",
|
||||
" dominant_models_dict[model_name] = row.predictor\n",
|
||||
" row_id = row_id + 1"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"predictions_dominant = {\"census_unmitigated\": unmitigated_predictor.predict(X_test)}\n",
|
||||
"models_dominant = {\"census_unmitigated\": unmitigated_predictor}\n",
|
||||
"for name, predictor in dominant_models_dict.items():\n",
|
||||
" value = predictor.predict(X_test)\n",
|
||||
" predictions_dominant[name] = value\n",
|
||||
" models_dominant[name] = predictor"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"FairlearnDashboard(sensitive_features=A_test, \n",
|
||||
" sensitive_feature_names=['Sex', 'Race'],\n",
|
||||
" y_true=Y_test.tolist(),\n",
|
||||
" y_pred=predictions_dominant)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"When using sex as the sensitive feature, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute \"sex\"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.\n",
|
||||
"\n",
|
||||
"By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints."
|
||||
]
|
||||
},
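If a static view of the same trade-off is wanted outside the dashboard widget, the error and disparity values computed earlier can be plotted directly. This is an illustrative sketch, not part of the original notebook; it assumes the `all_results` and `dominant_models_dict` objects built in the grid-search cells above.

```python
# Illustrative sketch (not in the original notebook): scatter the error-disparity
# trade-off for the full sweep, highlighting the dominant (Pareto) models.
import matplotlib.pyplot as plt

plt.scatter(all_results["error"], all_results["disparity"], label="all sweep models")
dominant = all_results[all_results["predictor"].isin(list(dominant_models_dict.values()))]
plt.scatter(dominant["error"], dominant["disparity"], label="dominant models")
plt.xlabel("error")
plt.ylabel("demographic parity difference")
plt.legend()
plt.show()
```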
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"AzureUpload\"></a>\n",
|
||||
"## Uploading a Fairness Dashboard to Azure\n",
|
||||
"\n",
|
||||
"Uploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:\n",
|
||||
"1. Register the dominant models\n",
|
||||
"1. Precompute all the required metrics\n",
|
||||
"1. Upload to Azure\n",
|
||||
"\n",
|
||||
"Before that, we need to connect to Azure Machine Learning Studio:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace, Experiment, Model\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"ws.get_details()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"RegisterModels\"></a>\n",
|
||||
"### Registering Models\n",
|
||||
"\n",
|
||||
"The fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import joblib\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.makedirs('models', exist_ok=True)\n",
|
||||
"def register_model(name, model):\n",
|
||||
" print(\"Registering \", name)\n",
|
||||
" model_path = \"models/{0}.pkl\".format(name)\n",
|
||||
" joblib.dump(value=model, filename=model_path)\n",
|
||||
" registered_model = Model.register(model_path=model_path,\n",
|
||||
" model_name=name,\n",
|
||||
" workspace=ws)\n",
|
||||
" print(\"Registered \", registered_model.id)\n",
|
||||
" return registered_model.id\n",
|
||||
"\n",
|
||||
"model_name_id_mapping = dict()\n",
|
||||
"for name, model in models_dominant.items():\n",
|
||||
" m_id = register_model(name, model)\n",
|
||||
" model_name_id_mapping[name] = m_id"
|
||||
]
|
||||
},
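As a sanity check on the registration step, one of the registered models can be pulled back from the workspace and deserialized. This round-trip is not part of the original notebook; it assumes a single-file pickle per model, as produced by the `register_model` helper above.

```python
# Illustrative round-trip (not in the original notebook): download one registered
# model by name/version and confirm it loads back with joblib.
from azureml.core.model import Model
import joblib

some_id = next(iter(model_name_id_mapping.values()))   # e.g. "census_gs_model_0:1"
name, version = some_id.rsplit(":", 1)
downloaded_path = Model(ws, name=name, version=int(version)).download(exist_ok=True)
restored = joblib.load(downloaded_path)
print("Reloaded", some_id, "as", type(restored).__name__)
```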
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now, produce new predictions dictionaries, with the updated names:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"predictions_dominant_ids = dict()\n",
|
||||
"for name, y_pred in predictions_dominant.items():\n",
|
||||
" predictions_dominant_ids[model_name_id_mapping[name]] = y_pred"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"PrecomputeMetrics\"></a>\n",
|
||||
"### Precomputing Metrics\n",
|
||||
"\n",
|
||||
"We create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sf = { 'sex': A_test.Sex, 'race': A_test.Race }\n",
|
||||
"\n",
|
||||
"from fairlearn.metrics._group_metric_set import _create_group_metric_set\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"dash_dict = _create_group_metric_set(y_true=Y_test,\n",
|
||||
" predictions=predictions_dominant_ids,\n",
|
||||
" sensitive_features=sf,\n",
|
||||
" prediction_type='binary_classification')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"DashboardUpload\"></a>\n",
|
||||
"### Uploading the Dashboard\n",
|
||||
"\n",
|
||||
"Now, we import our `contrib` package which contains the routine to perform the upload:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we can create an Experiment, then a Run, and upload our dashboard to it:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"exp = Experiment(ws, \"Test_Fairlearn_GridSearch_Census_Demo\")\n",
|
||||
"print(exp)\n",
|
||||
"\n",
|
||||
"run = exp.start_logging()\n",
|
||||
"try:\n",
|
||||
" dashboard_title = \"Dominant Models from GridSearch\"\n",
|
||||
" upload_id = upload_dashboard_dictionary(run,\n",
|
||||
" dash_dict,\n",
|
||||
" dashboard_name=dashboard_title)\n",
|
||||
" print(\"\\nUploaded to id: {0}\\n\".format(upload_id))\n",
|
||||
"\n",
|
||||
" downloaded_dict = download_dashboard_by_upload_id(run, upload_id)\n",
|
||||
"finally:\n",
|
||||
" run.complete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The dashboard can be viewed in the Run Details page.\n",
|
||||
"\n",
|
||||
"Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(dash_dict == downloaded_dict)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"Conclusion\"></a>\n",
|
||||
"## Conclusion\n",
|
||||
"\n",
|
||||
"In this notebook we have demonstrated how to use the `GridSearch` algorithm from Fairlearn to generate a collection of models, and then present them in the fairness dashboard in Azure Machine Learning Studio. Please remember that this notebook has not attempted to discuss the many considerations which should be part of any approach to unfairness mitigation. The [Fairlearn website](http://fairlearn.github.io/) provides that discussion"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "riedgar"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.10"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
contrib/fairness/upload-fairness-dashboard.ipynb (Normal file, 507 lines)
@@ -0,0 +1,507 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Upload a Fairness Dashboard to Azure Machine Learning Studio\n",
|
||||
"**This notebook shows how to generate and upload a fairness assessment dashboard from Fairlearn to AzureML Studio**\n",
|
||||
"\n",
|
||||
"## Table of Contents\n",
|
||||
"\n",
|
||||
"1. [Introduction](#Introduction)\n",
|
||||
"1. [Loading the Data](#LoadingData)\n",
|
||||
"1. [Processing the Data](#ProcessingData)\n",
|
||||
"1. [Training Models](#TrainingModels)\n",
|
||||
"1. [Logging in to AzureML](#LoginAzureML)\n",
|
||||
"1. [Registering the Models](#RegisterModels)\n",
|
||||
"1. [Using the Fairlearn Dashboard](#LocalDashboard)\n",
|
||||
"1. [Uploading a Fairness Dashboard to Azure](#AzureUpload)\n",
|
||||
" 1. Computing Fairness Metrics\n",
|
||||
" 1. Uploading to Azure\n",
|
||||
"1. [Conclusion](#Conclusion)\n",
|
||||
" \n",
|
||||
"\n",
|
||||
"<a id=\"Introduction\"></a>\n",
|
||||
"## Introduction\n",
|
||||
"\n",
|
||||
"In this notebook, we walk through a simple example of using the `azureml-contrib-fairness` package to upload a collection of fairness statistics for a fairness dashboard. It is an example of integrating the [open source Fairlearn package](https://www.github.com/fairlearn/fairlearn) with Azure Machine Learning. This is not an example of fairness analysis or mitigation - this notebook simply shows how to get a fairness dashboard into the Azure Machine Learning portal. We will load the data and train a couple of simple models. We will then use Fairlearn to generate data for a Fairness dashboard, which we can upload to Azure Machine Learning portal and view there.\n",
|
||||
"\n",
|
||||
"### Setup\n",
|
||||
"\n",
|
||||
"To use this notebook, an Azure Machine Learning workspace is required.\n",
|
||||
"Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.\n",
|
||||
"This notebook also requires the following packages:\n",
|
||||
"* `azureml-contrib-fairness`\n",
|
||||
"* `fairlearn==0.4.6`\n",
|
||||
"* `joblib`\n",
|
||||
"* `shap`\n",
|
||||
"\n",
|
||||
"Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# !pip install --upgrade scikit-learn>=0.22.1"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"LoadingData\"></a>\n",
|
||||
"## Loading the Data\n",
|
||||
"We use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from sklearn import svm\n",
|
||||
"from sklearn.preprocessing import LabelEncoder, StandardScaler\n",
|
||||
"from sklearn.linear_model import LogisticRegression\n",
|
||||
"import pandas as pd\n",
|
||||
"import shap"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now we can load the data:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"X_raw, Y = shap.datasets.adult()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can take a look at some of the data. For example, the next cells shows the counts of the different races identified in the dataset:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(X_raw[\"Race\"].value_counts().to_dict())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"ProcessingData\"></a>\n",
|
||||
"## Processing the Data\n",
|
||||
"\n",
|
||||
"With the data loaded, we process it for our needs. First, we extract the sensitive features of interest into `A` (conventionally used in the literature) and put the rest of the feature data into `X`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"A = X_raw[['Sex','Race']]\n",
|
||||
"X = X_raw.drop(labels=['Sex', 'Race'],axis = 1)\n",
|
||||
"X = pd.get_dummies(X)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, we apply a standard set of scalings:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sc = StandardScaler()\n",
|
||||
"X_scaled = sc.fit_transform(X)\n",
|
||||
"X_scaled = pd.DataFrame(X_scaled, columns=X.columns)\n",
|
||||
"\n",
|
||||
"le = LabelEncoder()\n",
|
||||
"Y = le.fit_transform(Y)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, we can then split our data into training and test sets, and also make the labels on our test portion of `A` human-readable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from sklearn.model_selection import train_test_split\n",
|
||||
"X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_scaled, \n",
|
||||
" Y, \n",
|
||||
" A,\n",
|
||||
" test_size = 0.2,\n",
|
||||
" random_state=0,\n",
|
||||
" stratify=Y)\n",
|
||||
"\n",
|
||||
"# Work around indexing issue\n",
|
||||
"X_train = X_train.reset_index(drop=True)\n",
|
||||
"A_train = A_train.reset_index(drop=True)\n",
|
||||
"X_test = X_test.reset_index(drop=True)\n",
|
||||
"A_test = A_test.reset_index(drop=True)\n",
|
||||
"\n",
|
||||
"# Improve labels\n",
|
||||
"A_test.Sex.loc[(A_test['Sex'] == 0)] = 'female'\n",
|
||||
"A_test.Sex.loc[(A_test['Sex'] == 1)] = 'male'\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 0)] = 'Amer-Indian-Eskimo'\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 1)] = 'Asian-Pac-Islander'\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 2)] = 'Black'\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 3)] = 'Other'\n",
|
||||
"A_test.Race.loc[(A_test['Race'] == 4)] = 'White'"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"TrainingModels\"></a>\n",
|
||||
"## Training Models\n",
|
||||
"\n",
|
||||
"We now train a couple of different models on our data. The `adult` census dataset is a classification problem - the goal is to predict whether a particular individual exceeds an income threshold. For the purpose of generating a dashboard to upload, it is sufficient to train two basic classifiers. First, a logistic regression classifier:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"lr_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)\n",
|
||||
"\n",
|
||||
"lr_predictor.fit(X_train, Y_train)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"And for comparison, a support vector classifier:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"svm_predictor = svm.SVC()\n",
|
||||
"\n",
|
||||
"svm_predictor.fit(X_train, Y_train)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"LoginAzureML\"></a>\n",
|
||||
"## Logging in to AzureML\n",
|
||||
"\n",
|
||||
"With our two classifiers trained, we can log into our AzureML workspace:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace, Experiment, Model\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"ws.get_details()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"RegisterModels\"></a>\n",
|
||||
"## Registering the Models\n",
|
||||
"\n",
|
||||
"Next, we register our models. By default, the subroutine which uploads the models checks that the names provided correspond to registered models in the workspace. We define a utility routine to do the registering:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import joblib\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.makedirs('models', exist_ok=True)\n",
|
||||
"def register_model(name, model):\n",
|
||||
" print(\"Registering \", name)\n",
|
||||
" model_path = \"models/{0}.pkl\".format(name)\n",
|
||||
" joblib.dump(value=model, filename=model_path)\n",
|
||||
" registered_model = Model.register(model_path=model_path,\n",
|
||||
" model_name=name,\n",
|
||||
" workspace=ws)\n",
|
||||
" print(\"Registered \", registered_model.id)\n",
|
||||
" return registered_model.id"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now, we register the models. For convenience in subsequent method calls, we store the results in a dictionary, which maps the `id` of the registered model (a string in `name:version` format) to the predictor itself:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"model_dict = {}\n",
|
||||
"\n",
|
||||
"lr_reg_id = register_model(\"fairness_linear_regression\", lr_predictor)\n",
|
||||
"model_dict[lr_reg_id] = lr_predictor\n",
|
||||
"svm_reg_id = register_model(\"fairness_svm\", svm_predictor)\n",
|
||||
"model_dict[svm_reg_id] = svm_predictor"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"LocalDashboard\"></a>\n",
|
||||
"## Using the Fairlearn Dashboard\n",
|
||||
"\n",
|
||||
"We can now examine the fairness of the two models we have training, both as a function of race and (binary) sex. Before uploading the dashboard to the AzureML portal, we will first instantiate a local instance of the Fairlearn dashboard.\n",
|
||||
"\n",
|
||||
"Regardless of the viewing location, the dashboard is based on three things - the true values, the model predictions and the sensitive feature values. The dashboard can use predictions from multiple models and multiple sensitive features if desired (as we are doing here).\n",
|
||||
"\n",
|
||||
"Our first step is to generate a dictionary mapping the `id` of the registered model to the corresponding array of predictions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ys_pred = {}\n",
|
||||
"for n, p in model_dict.items():\n",
|
||||
" ys_pred[n] = p.predict(X_test)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can examine these predictions in a locally invoked Fairlearn dashboard. This can be compared to the dashboard uploaded to the portal (in the next section):"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from fairlearn.widget import FairlearnDashboard\n",
|
||||
"\n",
|
||||
"FairlearnDashboard(sensitive_features=A_test, \n",
|
||||
" sensitive_feature_names=['Sex', 'Race'],\n",
|
||||
" y_true=Y_test.tolist(),\n",
|
||||
" y_pred=ys_pred)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"AzureUpload\"></a>\n",
|
||||
"## Uploading a Fairness Dashboard to Azure\n",
|
||||
"\n",
|
||||
"Uploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. The required stages are therefore:\n",
|
||||
"1. Precompute all the required metrics\n",
|
||||
"1. Upload to Azure\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"### Computing Fairness Metrics\n",
|
||||
"We use Fairlearn to create a dictionary which contains all the data required to display a dashboard. This includes both the raw data (true values, predicted values and sensitive features), and also the fairness metrics. The API is similar to that used to invoke the Dashboard locally. However, there are a few minor changes to the API, and the type of problem being examined (binary classification, regression etc.) needs to be specified explicitly:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"sf = { 'Race': A_test.Race, 'Sex': A_test.Sex }\n",
|
||||
"\n",
|
||||
"from fairlearn.metrics._group_metric_set import _create_group_metric_set\n",
|
||||
"\n",
|
||||
"dash_dict = _create_group_metric_set(y_true=Y_test,\n",
|
||||
" predictions=ys_pred,\n",
|
||||
" sensitive_features=sf,\n",
|
||||
" prediction_type='binary_classification')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The `_create_group_metric_set()` method is currently underscored since its exact design is not yet final in Fairlearn."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Uploading to Azure\n",
|
||||
"\n",
|
||||
"We can now import the `azureml.contrib.fairness` package itself. We will round-trip the data, so there are two required subroutines:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, we can upload the generated dictionary to AzureML. The upload method requires a run, so we first create an experiment and a run. The uploaded dashboard can be seen on the corresponding Run Details page in AzureML Studio. For completeness, we also download the dashboard dictionary which we uploaded."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"exp = Experiment(ws, \"notebook-01\")\n",
|
||||
"print(exp)\n",
|
||||
"\n",
|
||||
"run = exp.start_logging()\n",
|
||||
"try:\n",
|
||||
" dashboard_title = \"Sample notebook upload\"\n",
|
||||
" upload_id = upload_dashboard_dictionary(run,\n",
|
||||
" dash_dict,\n",
|
||||
" dashboard_name=dashboard_title)\n",
|
||||
" print(\"\\nUploaded to id: {0}\\n\".format(upload_id))\n",
|
||||
"\n",
|
||||
" downloaded_dict = download_dashboard_by_upload_id(run, upload_id)\n",
|
||||
"finally:\n",
|
||||
" run.complete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(dash_dict == downloaded_dict)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"<a id=\"Conclusion\"></a>\n",
|
||||
"## Conclusion\n",
|
||||
"\n",
|
||||
"In this notebook we have demonstrated how to generate and upload a fairness dashboard to AzureML Studio. We have not discussed how to analyse the results and apply mitigations. Those topics will be covered elsewhere."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "riedgar"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.8"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
||||
@@ -6,7 +6,7 @@ dependencies:
|
||||
- python>=3.5.2,<3.6.8
|
||||
- nb_conda
|
||||
- matplotlib==2.1.0
|
||||
- numpy>=1.16.0,<=1.16.2
|
||||
- numpy~=1.16.0
|
||||
- cython
|
||||
- urllib3<1.24
|
||||
- scipy==1.4.1
|
||||
|
||||
@@ -7,7 +7,7 @@ dependencies:
|
||||
- python>=3.5.2,<3.6.8
|
||||
- nb_conda
|
||||
- matplotlib==2.1.0
|
||||
- numpy>=1.16.0,<=1.16.2
|
||||
- numpy~=1.16.0
|
||||
- cython
|
||||
- urllib3<1.24
|
||||
- scipy==1.4.1
|
||||
|
||||
@@ -57,7 +57,7 @@
|
||||
"9. Test the ACI service.\n",
|
||||
"\n",
|
||||
"In addition this notebook showcases the following features\n",
|
||||
"- **Blacklisting** certain pipelines\n",
|
||||
"- **Blocking** certain pipelines\n",
|
||||
"- Specifying **target metrics** to indicate stopping criteria\n",
|
||||
"- Handling **missing data** in the input"
|
||||
]
|
||||
@@ -105,7 +105,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||
]
|
||||
},
|
||||
@@ -314,8 +314,8 @@
|
||||
"|**task**|classification or regression or forecasting|\n",
|
||||
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
|
||||
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
|
||||
"|**blacklist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. <br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGD</i><br><i>MultinomialNaiveBayes</i><br><i>BernoulliNaiveBayes</i><br><i>SVM</i><br><i>LinearSVM</i><br><i>KNN</i><br><i>DecisionTree</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>GradientBoosting</i><br><i>TensorFlowDNN</i><br><i>TensorFlowLinearClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i><br><br>Allowed values for **Forecasting**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i><br><i>Arima</i><br><i>Prophet</i>|\n",
|
||||
"| **whitelist_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blacklist_models** allowed for **whitelist_models**.|\n",
|
||||
"|**blocked_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run. <br><br> Allowed values for **Classification**<br><i>LogisticRegression</i><br><i>SGD</i><br><i>MultinomialNaiveBayes</i><br><i>BernoulliNaiveBayes</i><br><i>SVM</i><br><i>LinearSVM</i><br><i>KNN</i><br><i>DecisionTree</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>GradientBoosting</i><br><i>TensorFlowDNN</i><br><i>TensorFlowLinearClassifier</i><br><br>Allowed values for **Regression**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i><br><br>Allowed values for **Forecasting**<br><i>ElasticNet</i><br><i>GradientBoosting</i><br><i>DecisionTree</i><br><i>KNN</i><br><i>LassoLars</i><br><i>SGD</i><br><i>RandomForest</i><br><i>ExtremeRandomTrees</i><br><i>LightGBM</i><br><i>TensorFlowLinearRegressor</i><br><i>TensorFlowDNN</i><br><i>Arima</i><br><i>Prophet</i>|\n",
|
||||
"|**allowed_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|\n",
|
||||
"|**experiment_exit_score**| Value indicating the target for *primary_metric*. <br>Once the target is surpassed the run terminates.|\n",
|
||||
"|**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|\n",
|
||||
"|**enable_early_stopping**| Flag to enble early termination if the score is not improving in the short term.|\n",
|
||||
@@ -349,7 +349,7 @@
|
||||
" debug_log = 'automl_errors.log',\n",
|
||||
" compute_target=compute_target,\n",
|
||||
" experiment_exit_score = 0.9984,\n",
|
||||
" blacklist_models = ['KNN','LinearSVM'],\n",
|
||||
" blocked_models = ['KNN','LinearSVM'],\n",
|
||||
" enable_onnx_compatible_models=True,\n",
|
||||
" training_data = train_data,\n",
|
||||
" label_column_name = label,\n",
|
||||
@@ -675,10 +675,8 @@
|
||||
"model_name = best_run.properties['model_name']\n",
|
||||
"\n",
|
||||
"script_file_name = 'inference/score.py'\n",
|
||||
"conda_env_file_name = 'inference/env.yml'\n",
|
||||
"\n",
|
||||
"best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')\n",
|
||||
"best_run.download_file('outputs/conda_env_v_1_0_0.yml', 'inference/env.yml')"
|
||||
"best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -721,8 +719,7 @@
|
||||
"from azureml.core.model import Model\n",
|
||||
"from azureml.core.environment import Environment\n",
|
||||
"\n",
|
||||
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=conda_env_file_name)\n",
|
||||
"inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)\n",
|
||||
"inference_config = InferenceConfig(entry_script=script_file_name)\n",
|
||||
"\n",
|
||||
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
||||
" memory_gb = 1, \n",
|
||||
|
||||
@@ -2,7 +2,3 @@ name: auto-ml-classification-bank-marketing-all-features
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azureml-train-automl
|
||||
- azureml-widgets
|
||||
- matplotlib
|
||||
- onnxruntime==1.0.0
|
||||
|
||||
@@ -93,7 +93,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -2,6 +2,3 @@ name: auto-ml-classification-credit-card-fraud
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azureml-train-automl
|
||||
- azureml-widgets
|
||||
- matplotlib
|
||||
|
||||
@@ -97,7 +97,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -2,11 +2,3 @@ name: auto-ml-classification-text-dnn
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azureml-train-automl
|
||||
- azureml-widgets
|
||||
- matplotlib
|
||||
- https://download.pytorch.org/whl/cpu/torch-1.1.0-cp35-cp35m-win_amd64.whl
|
||||
- sentencepiece==0.1.82
|
||||
- pytorch-transformers==1.0
|
||||
- spacy==2.1.8
|
||||
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
|
||||
|
||||
@@ -88,7 +88,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||
]
|
||||
},
|
||||
@@ -201,7 +201,7 @@
|
||||
"conda_run_config.environment.docker.enabled = True\n",
|
||||
"conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]', 'applicationinsights', 'azureml-opendatasets'], \n",
|
||||
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]', 'applicationinsights', 'azureml-opendatasets', 'azureml-defaults'], \n",
|
||||
" conda_packages=['numpy==1.16.2'], \n",
|
||||
" pin_sdk_version=False)\n",
|
||||
"#cd.add_pip_package('azureml-explain-model')\n",
|
||||
|
||||
@@ -2,7 +2,3 @@ name: auto-ml-continuous-retraining
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azureml-train-automl
|
||||
- azureml-widgets
|
||||
- matplotlib
|
||||
- azureml-pipeline
|
||||
|
||||
@@ -114,7 +114,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||
]
|
||||
},
|
||||
@@ -352,8 +352,6 @@
|
||||
"|**label_column_name**|The name of the label column.|\n",
|
||||
"|**enable_dnn**|Enable Forecasting DNNs|\n",
|
||||
"\n",
|
||||
"This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results.\n",
|
||||
"\n",
|
||||
"This step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)."
|
||||
]
|
||||
},
|
||||
|
||||
@@ -1,11 +1,4 @@
|
||||
name: auto-ml-forecasting-beer-remote
|
||||
dependencies:
|
||||
- py-xgboost<=0.90
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- numpy==1.16.2
|
||||
- pandas==0.23.4
|
||||
- azureml-train-automl
|
||||
- azureml-widgets
|
||||
- matplotlib
|
||||
- azureml-train
|
||||
|
||||
@@ -87,7 +87,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||
]
|
||||
},
|
||||
@@ -250,7 +250,7 @@
|
||||
"|-|-|\n",
|
||||
"|**task**|forecasting|\n",
|
||||
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>\n",
|
||||
"|**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|\n",
|
||||
"|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|\n",
|
||||
"|**experiment_timeout_hours**|Experimentation timeout in hours.|\n",
|
||||
"|**training_data**|Input dataset, containing both features and label column.|\n",
|
||||
"|**label_column_name**|The name of the label column.|\n",
|
||||
@@ -263,7 +263,7 @@
|
||||
"|**target_lags**|The target_lags specifies how far back we will construct the lags of the target variable.|\n",
|
||||
"|**drop_column_names**|Name(s) of columns to drop prior to modeling|\n",
|
||||
"\n",
|
||||
"This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results."
|
||||
"This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -307,7 +307,7 @@
|
||||
"\n",
|
||||
"automl_config = AutoMLConfig(task='forecasting', \n",
|
||||
" primary_metric='normalized_root_mean_squared_error',\n",
|
||||
" blacklist_models = ['ExtremeRandomTrees'], \n",
|
||||
" blocked_models = ['ExtremeRandomTrees'], \n",
|
||||
" experiment_timeout_hours=0.3,\n",
|
||||
" training_data=train,\n",
|
||||
" label_column_name=target_column_name,\n",
|
||||
|
||||
@@ -1,10 +1,4 @@
|
||||
name: auto-ml-forecasting-bike-share
|
||||
dependencies:
|
||||
- py-xgboost<=0.90
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- numpy==1.16.2
|
||||
- pandas==0.23.4
|
||||
- azureml-train-automl
|
||||
- azureml-widgets
|
||||
- matplotlib
|
||||
|
||||
@@ -97,7 +97,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
|
||||
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
|
||||
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
|
||||
]
|
||||
},
|
||||
@@ -303,7 +303,7 @@
"|-|-|\n",
"|**task**|forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|\n",
"|**blacklist_models**|Models in blacklist won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|\n",
"|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|\n",
"|**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.|\n",
"|**training_data**|The training data to be used within the experiment.|\n",
"|**label_column_name**|The name of the label column.|\n",
@@ -318,7 +318,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the experiment_timeout_hours parameter value to get results."
"This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results."
]
},
{
@@ -334,7 +334,7 @@
"\n",
"automl_config = AutoMLConfig(task='forecasting', \n",
"                             primary_metric='normalized_root_mean_squared_error',\n",
"                             blacklist_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], \n",
"                             blocked_models = ['ExtremeRandomTrees', 'AutoArima', 'Prophet'], \n",
"                             experiment_timeout_hours=0.3,\n",
"                             training_data=train,\n",
"                             label_column_name=target_column_name,\n",
@@ -560,7 +560,7 @@
"### Using lags and rolling window features\n",
"Now we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.\n",
"\n",
"This notebook uses the blacklist_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blacklist_models list but you may need to increase the iteration_timeout_minutes parameter value to get results."
"This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results."
]
},
{
@@ -578,7 +578,7 @@
"\n",
"automl_config = AutoMLConfig(task='forecasting', \n",
"                             primary_metric='normalized_root_mean_squared_error',\n",
"                             blacklist_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blacklisted for tutorial purposes, remove this for real use cases. \n",
"                             blocked_models = ['ElasticNet','ExtremeRandomTrees','GradientBoosting','XGBoostRegressor','ExtremeRandomTrees', 'AutoArima', 'Prophet'], #These models are blocked for tutorial purposes, remove this for real use cases. \n",
"                             experiment_timeout_hours=0.3,\n",
"                             training_data=train,\n",
"                             label_column_name=target_column_name,\n",
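As a companion to the lags discussion above, here is a hedged sketch (not part of the diff) of how `target_lags` and `target_rolling_window_size` are passed alongside `max_horizon`. The lag of 12 and window of 4 are placeholder values, and the surrounding variables are assumed from the notebook.

    from azureml.train.automl import AutoMLConfig

    # Placeholder lag/window values; tune them to the data's seasonality.
    automl_config = AutoMLConfig(task='forecasting',
                                 primary_metric='normalized_root_mean_squared_error',
                                 blocked_models=['ElasticNet', 'ExtremeRandomTrees', 'GradientBoosting'],
                                 experiment_timeout_hours=0.3,
                                 training_data=train,
                                 label_column_name=target_column_name,
                                 time_column_name=time_column_name,
                                 max_horizon=48,                    # still required with lag features
                                 target_lags=12,                    # how far back to lag the target
                                 target_rolling_window_size=4)      # window for min/max/sum features
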
@@ -2,8 +2,3 @@ name: auto-ml-forecasting-energy-demand
dependencies:
- pip:
  - azureml-sdk
  - numpy==1.16.2
  - pandas==0.23.4
  - azureml-train-automl
  - azureml-widgets
  - matplotlib

@@ -94,7 +94,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

@@ -1,10 +1,4 @@
name: auto-ml-forecasting-function
dependencies:
- py-xgboost<=0.90
- pip:
  - azureml-sdk
  - numpy==1.16.2
  - pandas==0.23.4
  - azureml-train-automl
  - azureml-widgets
  - matplotlib

@@ -82,7 +82,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

@@ -1,10 +1,4 @@
name: auto-ml-forecasting-orange-juice-sales
dependencies:
- py-xgboost<=0.90
- pip:
  - azureml-sdk
  - numpy==1.16.2
  - pandas==0.23.4
  - azureml-train-automl
  - azureml-widgets
  - matplotlib

@@ -96,7 +96,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

@@ -2,6 +2,3 @@ name: auto-ml-classification-credit-card-fraud-local
dependencies:
- pip:
  - azureml-sdk
  - azureml-train-automl
  - azureml-widgets
  - matplotlib

@@ -98,7 +98,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

@@ -2,6 +2,3 @@ name: auto-ml-regression-explanation-featurization
dependencies:
- pip:
  - azureml-sdk
  - azureml-train-automl
  - azureml-widgets
  - matplotlib

@@ -92,7 +92,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

@@ -2,7 +2,3 @@ name: auto-ml-regression
dependencies:
- pip:
  - azureml-sdk
  - pandas==0.23.4
  - azureml-train-automl
  - azureml-widgets
  - matplotlib

@@ -50,10 +50,12 @@ pip install azureml-accel-models[gpu]

### Step 4: Follow our notebooks

The notebooks in this repo walk through the following scenarios:
* [Quickstart](accelerated-models-quickstart.ipynb), deploy and inference a ResNet50 model trained on ImageNet
* [Object Detection](accelerated-models-object-detection.ipynb), deploy and inference an SSD-VGG model that can do object detection
* [Training models](accelerated-models-training.ipynb), train one of our accelerated models on the Kaggle Cats and Dogs dataset to see how to improve accuracy on custom datasets
We provide notebooks to walk through the following scenarios, linked below:
* [Quickstart](https://github.com/Azure/MachineLearningNotebooks/blob/33d6def8c30d3dd3a5bfbea50b9c727788185faf/how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb), deploy and inference a ResNet50 model trained on ImageNet
* [Object Detection](https://github.com/Azure/MachineLearningNotebooks/blob/33d6def8c30d3dd3a5bfbea50b9c727788185faf/how-to-use-azureml/deployment/accelerated-models/accelerated-models-object-detection.ipynb), deploy and inference an SSD-VGG model that can do object detection
* [Training models](https://github.com/Azure/MachineLearningNotebooks/blob/33d6def8c30d3dd3a5bfbea50b9c727788185faf/how-to-use-azureml/deployment/accelerated-models/accelerated-models-training.ipynb), train one of our accelerated models on the Kaggle Cats and Dogs dataset to see how to improve accuracy on custom datasets

**Note**: the above notebooks work only for tensorflow >= 1.6,<2.0.

<a name="model-classes"></a>
## Model Classes

Binary file not shown.
@@ -86,7 +86,37 @@
"source": [
"In this example, we will be using and registering two models. \n",
"\n",
"You wil need to have a `first_model.pkl` file and `second_model.pkl` file in the current directory. The below call registers the files as Models with the names `my_first_model` and `my_second_model` in the workspace."
"First we will train two simple models on the [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) included with scikit-learn, serializing them to files in the current directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"import sklearn\n",
"\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import BayesianRidge, Ridge\n",
"\n",
"x, y = load_diabetes(return_X_y=True)\n",
"\n",
"first_model = Ridge().fit(x, y)\n",
"second_model = BayesianRidge().fit(x, y)\n",
"\n",
"joblib.dump(first_model, \"first_model.pkl\")\n",
"joblib.dump(second_model, \"second_model.pkl\")\n",
"\n",
"print(\"Trained models using scikit-learn {}.\".format(sklearn.__version__))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have our trained models locally, we will register them as Models with the names `my_first_model` and `my_second_model` in the workspace."
]
},
{
@@ -102,12 +132,12 @@
"from azureml.core.model import Model\n",
"\n",
"my_model_1 = Model.register(model_path=\"first_model.pkl\",\n",
"                            model_name=\"my_first_model\",\n",
"                            workspace=ws)\n",
"                            model_name=\"my_first_model\",\n",
"                            workspace=ws)\n",
"\n",
"my_model_2 = Model.register(model_path=\"second_model.pkl\",\n",
"                            model_name=\"my_second_model\",\n",
"                            workspace=ws)"
"                            model_name=\"my_second_model\",\n",
"                            workspace=ws)"
]
},
{
@@ -149,25 +179,24 @@
"outputs": [],
"source": [
"%%writefile score.py\n",
"import pickle\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
"    global model_1, model_2\n",
"    # note here \"my_first_model\" is the name of the model registered under the workspace\n",
"    # this call should return the path to the model.pkl file on the local disk.\n",
"    # Here \"my_first_model\" is the name of the model registered under the workspace.\n",
"    # This call will return the path to the .pkl file on the local disk.\n",
"    model_1_path = Model.get_model_path(model_name='my_first_model')\n",
"    model_2_path = Model.get_model_path(model_name='my_second_model')\n",
"    \n",
"    # deserialize the model files back into a sklearn model\n",
"    # Deserialize the model files back into scikit-learn models.\n",
"    model_1 = joblib.load(model_1_path)\n",
"    model_2 = joblib.load(model_2_path)\n",
"\n",
"# note you can pass in multiple rows for scoring\n",
"# Note you can pass in multiple rows for scoring.\n",
"def run(raw_data):\n",
"    try:\n",
"        data = json.loads(raw_data)['data']\n",
@@ -177,7 +206,7 @@
"        result_1 = model_1.predict(data)\n",
"        result_2 = model_2.predict(data)\n",
"\n",
"        # you can return any data type as long as it is JSON-serializable\n",
"        # You can return any JSON-serializable value.\n",
"        return {\"prediction1\": result_1.tolist(), \"prediction2\": result_2.tolist()}\n",
"    except Exception as e:\n",
"        result = str(e)\n",
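Before the ACI deployment further down, the same entry script can be exercised with a local Docker deployment for faster debugging. This is a hedged sketch, not part of the diff; it assumes a local Docker engine, and that `ws`, `my_model_1`, `my_model_2` and `inference_config` are defined as elsewhere in the notebook (port 6789 is an arbitrary choice).

    from azureml.core.model import Model
    from azureml.core.webservice import LocalWebservice

    local_config = LocalWebservice.deploy_configuration(port=6789)
    local_service = Model.deploy(ws, "local-multimodel-test",
                                 [my_model_1, my_model_2],
                                 inference_config, local_config)
    local_service.wait_for_deployment(show_output=True)

    # Same JSON contract as run() above: {"data": [[...], ...]}
    print(local_service.run('{"data": [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]}'))
    local_service.delete()
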
@@ -208,10 +237,10 @@
"source": [
"from azureml.core import Environment\n",
"\n",
"env = Environment.from_conda_specification(name='deploytocloudenv', file_path='myenv.yml')\n",
"\n",
"# This is optional at this point\n",
"# env.register(workspace=ws)"
"env = Environment(\"deploytocloudenv\")\n",
"env.python.conda_dependencies.add_pip_package(\"joblib\")\n",
"env.python.conda_dependencies.add_pip_package(\"numpy\")\n",
"env.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))"
]
},
{
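The `inference_config` passed to `Model.deploy` below ties this environment to the `score.py` written earlier. A minimal sketch of that wiring (the notebook may construct it slightly differently):

    from azureml.core.model import InferenceConfig

    # "score.py" is the entry script produced by the %%writefile cell above.
    inference_config = InferenceConfig(entry_script="score.py", environment=env)
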
@@ -281,25 +310,15 @@
},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice, Webservice\n",
"from azureml.exceptions import WebserviceException\n",
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aci_service_name = \"aciservice-multimodel\"\n",
"\n",
"deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"aci_service_name = 'aciservice-multimodel'\n",
"\n",
"try:\n",
"    # if you want to get existing service below is the command\n",
"    # since aci name needs to be unique in subscription deleting existing aci if any\n",
"    # we use aci_service_name to create azure aci\n",
"    service = Webservice(ws, name=aci_service_name)\n",
"    if service:\n",
"        service.delete()\n",
"except WebserviceException as e:\n",
"    print()\n",
"\n",
"service = Model.deploy(ws, aci_service_name, [my_model_1, my_model_2], inference_config, deployment_config)\n",
"\n",
"service = Model.deploy(ws, aci_service_name, [my_model_1, my_model_2], inference_config, deployment_config, overwrite=True)\n",
"service.wait_for_deployment(True)\n",
"\n",
"print(service.state)"
]
},
@@ -317,13 +336,11 @@
"outputs": [],
"source": [
"import json\n",
"test_sample = json.dumps({'data': [\n",
" [1,2,3,4,5,6,7,8,9,10], \n",
" [10,9,8,7,6,5,4,3,2,1]\n",
"]})\n",
"\n",
"test_sample_encoded = bytes(test_sample, encoding='utf8')\n",
"prediction = service.run(input_data=test_sample_encoded)\n",
"test_sample = json.dumps({'data': x[0:2].tolist()})\n",
"\n",
"prediction = service.run(test_sample)\n",
"\n",
"print(prediction)"
]
},
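In addition to `service.run`, the ACI endpoint can be called over plain HTTP. A hedged sketch (not part of the diff), assuming the service above is running and key authentication is left disabled (the ACI default):

    import json
    import requests

    headers = {"Content-Type": "application/json"}
    body = json.dumps({"data": x[0:2].tolist()})  # same payload shape as service.run

    response = requests.post(service.scoring_uri, data=body, headers=headers)
    print(response.status_code, response.json())
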
@@ -2,3 +2,5 @@ name: multi-model-register-and-deploy
dependencies:
- pip:
  - azureml-sdk
  - numpy
  - scikit-learn

@@ -1,8 +0,0 @@
name: project_environment
dependencies:
- python=3.6.2
- pip:
  - azureml-defaults
  - scikit-learn
  - numpy
  - inference-schema[numpy-support]
Binary file not shown.
@@ -1,442 +0,0 @@
|
||||
3.807590643342410180e-02,5.068011873981870252e-02,6.169620651868849837e-02,2.187235499495579841e-02,-4.422349842444640161e-02,-3.482076283769860309e-02,-4.340084565202689815e-02,-2.592261998182820038e-03,1.990842087631829876e-02,-1.764612515980519894e-02
|
||||
-1.882016527791040067e-03,-4.464163650698899782e-02,-5.147406123880610140e-02,-2.632783471735180084e-02,-8.448724111216979540e-03,-1.916333974822199970e-02,7.441156407875940126e-02,-3.949338287409189657e-02,-6.832974362442149896e-02,-9.220404962683000083e-02
|
||||
8.529890629667830071e-02,5.068011873981870252e-02,4.445121333659410312e-02,-5.670610554934250001e-03,-4.559945128264750180e-02,-3.419446591411950259e-02,-3.235593223976569732e-02,-2.592261998182820038e-03,2.863770518940129874e-03,-2.593033898947460017e-02
|
||||
-8.906293935226029801e-02,-4.464163650698899782e-02,-1.159501450521270051e-02,-3.665644679856060184e-02,1.219056876180000040e-02,2.499059336410210108e-02,-3.603757004385269719e-02,3.430885887772629900e-02,2.269202256674450122e-02,-9.361911330135799444e-03
|
||||
5.383060374248070309e-03,-4.464163650698899782e-02,-3.638469220447349689e-02,2.187235499495579841e-02,3.934851612593179802e-03,1.559613951041610019e-02,8.142083605192099172e-03,-2.592261998182820038e-03,-3.199144494135589684e-02,-4.664087356364819692e-02
|
||||
-9.269547780327989928e-02,-4.464163650698899782e-02,-4.069594049999709917e-02,-1.944209332987930153e-02,-6.899064987206669775e-02,-7.928784441181220555e-02,4.127682384197570165e-02,-7.639450375000099436e-02,-4.118038518800790082e-02,-9.634615654166470144e-02
|
||||
-4.547247794002570037e-02,5.068011873981870252e-02,-4.716281294328249912e-02,-1.599922263614299983e-02,-4.009563984984299695e-02,-2.480001206043359885e-02,7.788079970179680352e-04,-3.949338287409189657e-02,-6.291294991625119570e-02,-3.835665973397880263e-02
|
||||
6.350367559056099842e-02,5.068011873981870252e-02,-1.894705840284650021e-03,6.662967401352719310e-02,9.061988167926439408e-02,1.089143811236970016e-01,2.286863482154040048e-02,1.770335448356720118e-02,-3.581672810154919867e-02,3.064409414368320182e-03
|
||||
4.170844488444359899e-02,5.068011873981870252e-02,6.169620651868849837e-02,-4.009931749229690007e-02,-1.395253554402150001e-02,6.201685656730160021e-03,-2.867429443567860031e-02,-2.592261998182820038e-03,-1.495647502491130078e-02,1.134862324403770016e-02
|
||||
-7.090024709716259699e-02,-4.464163650698899782e-02,3.906215296718960200e-02,-3.321357610482440076e-02,-1.257658268582039982e-02,-3.450761437590899733e-02,-2.499265663159149983e-02,-2.592261998182820038e-03,6.773632611028609918e-02,-1.350401824497050006e-02
|
||||
-9.632801625429950054e-02,-4.464163650698899782e-02,-8.380842345523309422e-02,8.100872220010799790e-03,-1.033894713270950005e-01,-9.056118903623530669e-02,-1.394774321933030074e-02,-7.639450375000099436e-02,-6.291294991625119570e-02,-3.421455281914410201e-02
|
||||
2.717829108036539862e-02,5.068011873981870252e-02,1.750591148957160101e-02,-3.321357610482440076e-02,-7.072771253015849857e-03,4.597154030400080194e-02,-6.549067247654929980e-02,7.120997975363539678e-02,-9.643322289178400675e-02,-5.906719430815229877e-02
|
||||
1.628067572730669890e-02,-4.464163650698899782e-02,-2.884000768730720157e-02,-9.113481248670509197e-03,-4.320865536613589623e-03,-9.768885894535990141e-03,4.495846164606279866e-02,-3.949338287409189657e-02,-3.075120986455629965e-02,-4.249876664881350324e-02
|
||||
5.383060374248070309e-03,5.068011873981870252e-02,-1.894705840284650021e-03,8.100872220010799790e-03,-4.320865536613589623e-03,-1.571870666853709964e-02,-2.902829807069099918e-03,-2.592261998182820038e-03,3.839324821169769891e-02,-1.350401824497050006e-02
|
||||
4.534098333546320025e-02,-4.464163650698899782e-02,-2.560657146566450160e-02,-1.255635194240680048e-02,1.769438019460449832e-02,-6.128357906048329537e-05,8.177483968693349814e-02,-3.949338287409189657e-02,-3.199144494135589684e-02,-7.563562196749110123e-02
|
||||
-5.273755484206479882e-02,5.068011873981870252e-02,-1.806188694849819934e-02,8.040115678847230274e-02,8.924392882106320368e-02,1.076617872765389949e-01,-3.971920784793980114e-02,1.081111006295440019e-01,3.605579008983190309e-02,-4.249876664881350324e-02
|
||||
-5.514554978810590376e-03,-4.464163650698899782e-02,4.229558918883229851e-02,4.941532054484590319e-02,2.457414448561009990e-02,-2.386056667506489953e-02,7.441156407875940126e-02,-3.949338287409189657e-02,5.227999979678119719e-02,2.791705090337660150e-02
|
||||
7.076875249260000666e-02,5.068011873981870252e-02,1.211685112016709989e-02,5.630106193231849965e-02,3.420581449301800248e-02,4.941617338368559792e-02,-3.971920784793980114e-02,3.430885887772629900e-02,2.736770754260900093e-02,-1.077697500466389974e-03
|
||||
-3.820740103798660192e-02,-4.464163650698899782e-02,-1.051720243133190055e-02,-3.665644679856060184e-02,-3.734373413344069942e-02,-1.947648821001150138e-02,-2.867429443567860031e-02,-2.592261998182820038e-03,-1.811826730789670159e-02,-1.764612515980519894e-02
|
||||
-2.730978568492789874e-02,-4.464163650698899782e-02,-1.806188694849819934e-02,-4.009931749229690007e-02,-2.944912678412469915e-03,-1.133462820348369975e-02,3.759518603788870178e-02,-3.949338287409189657e-02,-8.944018957797799166e-03,-5.492508739331759815e-02
|
||||
-4.910501639104519755e-02,-4.464163650698899782e-02,-5.686312160821060252e-02,-4.354218818603310115e-02,-4.559945128264750180e-02,-4.327577130601600180e-02,7.788079970179680352e-04,-3.949338287409189657e-02,-1.190068480150809939e-02,1.549073015887240078e-02
|
||||
-8.543040090124079389e-02,5.068011873981870252e-02,-2.237313524402180162e-02,1.215130832538269907e-03,-3.734373413344069942e-02,-2.636575436938120090e-02,1.550535921336619952e-02,-3.949338287409189657e-02,-7.212845460195599356e-02,-1.764612515980519894e-02
|
||||
-8.543040090124079389e-02,-4.464163650698899782e-02,-4.050329988046450294e-03,-9.113481248670509197e-03,-2.944912678412469915e-03,7.767427965677820186e-03,2.286863482154040048e-02,-3.949338287409189657e-02,-6.117659509433449883e-02,-1.350401824497050006e-02
|
||||
4.534098333546320025e-02,5.068011873981870252e-02,6.061839444480759953e-02,3.105334362634819961e-02,2.870200306021350109e-02,-4.734670130927989828e-02,-5.444575906428809897e-02,7.120997975363539678e-02,1.335989800130079896e-01,1.356118306890790048e-01
|
||||
-6.363517019512339445e-02,-4.464163650698899782e-02,3.582871674554689856e-02,-2.288496402361559975e-02,-3.046396984243510131e-02,-1.885019128643240088e-02,-6.584467611156170040e-03,-2.592261998182820038e-03,-2.595242443518940012e-02,-5.492508739331759815e-02
|
||||
-6.726770864614299572e-02,5.068011873981870252e-02,-1.267282657909369996e-02,-4.009931749229690007e-02,-1.532848840222260020e-02,4.635943347782499856e-03,-5.812739686837520292e-02,3.430885887772629900e-02,1.919903307856710151e-02,-3.421455281914410201e-02
|
||||
-1.072256316073579990e-01,-4.464163650698899782e-02,-7.734155101194770121e-02,-2.632783471735180084e-02,-8.962994274508359616e-02,-9.619786134844690584e-02,2.655027262562750096e-02,-7.639450375000099436e-02,-4.257210492279420166e-02,-5.219804415301099697e-03
|
||||
-2.367724723390840155e-02,-4.464163650698899782e-02,5.954058237092670069e-02,-4.009931749229690007e-02,-4.284754556624519733e-02,-4.358891976780549654e-02,1.182372140927919965e-02,-3.949338287409189657e-02,-1.599826775813870117e-02,4.034337164788070335e-02
|
||||
5.260606023750229870e-02,-4.464163650698899782e-02,-2.129532317014089932e-02,-7.452802442965950069e-02,-4.009563984984299695e-02,-3.763909899380440266e-02,-6.584467611156170040e-03,-3.949338287409189657e-02,-6.092541861022970299e-04,-5.492508739331759815e-02
|
||||
6.713621404158050254e-02,5.068011873981870252e-02,-6.205954135808240159e-03,6.318680331979099896e-02,-4.284754556624519733e-02,-9.588471288665739722e-02,5.232173725423699961e-02,-7.639450375000099436e-02,5.942380044479410317e-02,5.276969239238479825e-02
|
||||
-6.000263174410389727e-02,-4.464163650698899782e-02,4.445121333659410312e-02,-1.944209332987930153e-02,-9.824676969418109224e-03,-7.576846662009279788e-03,2.286863482154040048e-02,-3.949338287409189657e-02,-2.712864555432650121e-02,-9.361911330135799444e-03
|
||||
-2.367724723390840155e-02,-4.464163650698899782e-02,-6.548561819925780014e-02,-8.141376581713200000e-02,-3.871968699164179961e-02,-5.360967054507050078e-02,5.968501286241110343e-02,-7.639450375000099436e-02,-3.712834601047360072e-02,-4.249876664881350324e-02
|
||||
3.444336798240450054e-02,5.068011873981870252e-02,1.252871188776620015e-01,2.875809638242839833e-02,-5.385516843185429725e-02,-1.290037051243130006e-02,-1.023070505174200062e-01,1.081111006295440019e-01,2.714857279071319972e-04,2.791705090337660150e-02
|
||||
3.081082953138499989e-02,-4.464163650698899782e-02,-5.039624916492520257e-02,-2.227739861197989939e-03,-4.422349842444640161e-02,-8.993489211265630334e-02,1.185912177278039964e-01,-7.639450375000099436e-02,-1.811826730789670159e-02,3.064409414368320182e-03
|
||||
1.628067572730669890e-02,-4.464163650698899782e-02,-6.332999405149600247e-02,-5.731367096097819691e-02,-5.798302700645770191e-02,-4.891244361822749687e-02,8.142083605192099172e-03,-3.949338287409189657e-02,-5.947269741072230137e-02,-6.735140813782170000e-02
|
||||
4.897352178648269744e-02,5.068011873981870252e-02,-3.099563183506899924e-02,-4.928030602040309877e-02,4.934129593323050011e-02,-4.132213582324419619e-03,1.333177689441520097e-01,-5.351580880693729975e-02,2.131084656824479978e-02,1.963283707370720027e-02
|
||||
1.264813727628719998e-02,-4.464163650698899782e-02,2.289497185897609866e-02,5.285819123858220142e-02,8.062710187196569719e-03,-2.855779360190789998e-02,3.759518603788870178e-02,-3.949338287409189657e-02,5.472400334817909689e-02,-2.593033898947460017e-02
|
||||
-9.147093429830140468e-03,-4.464163650698899782e-02,1.103903904628619932e-02,-5.731367096097819691e-02,-2.496015840963049931e-02,-4.296262284422640298e-02,3.023191042971450082e-02,-3.949338287409189657e-02,1.703713241477999851e-02,-5.219804415301099697e-03
|
||||
-1.882016527791040067e-03,5.068011873981870252e-02,7.139651518361660176e-02,9.761551025715360652e-02,8.786797596286209655e-02,7.540749571221680436e-02,-2.131101882750449997e-02,7.120997975363539678e-02,7.142403278057639360e-02,2.377494398854190089e-02
|
||||
-1.882016527791040067e-03,5.068011873981870252e-02,1.427247526792889930e-02,-7.452802442965950069e-02,2.558898754392050119e-03,6.201685656730160021e-03,-1.394774321933030074e-02,-2.592261998182820038e-03,1.919903307856710151e-02,3.064409414368320182e-03
|
||||
5.383060374248070309e-03,5.068011873981870252e-02,-8.361578283570040432e-03,2.187235499495579841e-02,5.484510736603499803e-02,7.321545647968999426e-02,-2.499265663159149983e-02,3.430885887772629900e-02,1.255315281338930007e-02,9.419076154073199869e-02
|
||||
-9.996055470531900466e-02,-4.464163650698899782e-02,-6.764124234701959781e-02,-1.089567313670219972e-01,-7.449446130487119566e-02,-7.271172671423199729e-02,1.550535921336619952e-02,-3.949338287409189657e-02,-4.986846773523059828e-02,-9.361911330135799444e-03
|
||||
-6.000263174410389727e-02,5.068011873981870252e-02,-1.051720243133190055e-02,-1.485159908304049987e-02,-4.972730985725089953e-02,-2.354741821327540133e-02,-5.812739686837520292e-02,1.585829843977170153e-02,-9.918957363154769225e-03,-3.421455281914410201e-02
|
||||
1.991321417832630017e-02,-4.464163650698899782e-02,-2.345094731790270046e-02,-7.108515373592319553e-02,2.044628591100669870e-02,-1.008203435632550049e-02,1.185912177278039964e-01,-7.639450375000099436e-02,-4.257210492279420166e-02,7.348022696655839847e-02
|
||||
4.534098333546320025e-02,5.068011873981870252e-02,6.816307896197400240e-02,8.100872220010799790e-03,-1.670444126042380101e-02,4.635943347782499856e-03,-7.653558588881050062e-02,7.120997975363539678e-02,3.243322577960189995e-02,-1.764612515980519894e-02
|
||||
2.717829108036539862e-02,5.068011873981870252e-02,-3.530688013059259805e-02,3.220096707616459941e-02,-1.120062982761920074e-02,1.504458729887179960e-03,-1.026610541524320026e-02,-2.592261998182820038e-03,-1.495647502491130078e-02,-5.078298047848289754e-02
|
||||
-5.637009329308430294e-02,-4.464163650698899782e-02,-1.159501450521270051e-02,-3.321357610482440076e-02,-4.697540414084860200e-02,-4.765984977106939996e-02,4.460445801105040325e-03,-3.949338287409189657e-02,-7.979397554541639223e-03,-8.806194271199530021e-02
|
||||
-7.816532399920170238e-02,-4.464163650698899782e-02,-7.303030271642410587e-02,-5.731367096097819691e-02,-8.412613131227909824e-02,-7.427746902317970690e-02,-2.499265663159149983e-02,-3.949338287409189657e-02,-1.811826730789670159e-02,-8.391983579716059960e-02
|
||||
6.713621404158050254e-02,5.068011873981870252e-02,-4.177375257387799801e-02,1.154374291374709975e-02,2.558898754392050119e-03,5.888537194940629722e-03,4.127682384197570165e-02,-3.949338287409189657e-02,-5.947269741072230137e-02,-2.178823207463989955e-02
|
||||
-4.183993948900609910e-02,5.068011873981870252e-02,1.427247526792889930e-02,-5.670610554934250001e-03,-1.257658268582039982e-02,6.201685656730160021e-03,-7.285394808472339667e-02,7.120997975363539678e-02,3.546193866076970125e-02,-1.350401824497050006e-02
|
||||
3.444336798240450054e-02,-4.464163650698899782e-02,-7.283766209689159811e-03,1.498661360748330083e-02,-4.422349842444640161e-02,-3.732595053201490098e-02,-2.902829807069099918e-03,-3.949338287409189657e-02,-2.139368094035999993e-02,7.206516329203029904e-03
|
||||
5.987113713954139715e-02,5.068011873981870252e-02,1.642809941569069870e-02,2.875809638242839833e-02,-4.147159270804409714e-02,-2.918409052548700047e-02,-2.867429443567860031e-02,-2.592261998182820038e-03,-2.396681493414269844e-03,-2.178823207463989955e-02
|
||||
-5.273755484206479882e-02,-4.464163650698899782e-02,-9.439390357450949676e-03,-5.670610554934250001e-03,3.970962592582259754e-02,4.471894645684260094e-02,2.655027262562750096e-02,-2.592261998182820038e-03,-1.811826730789670159e-02,-1.350401824497050006e-02
|
||||
-9.147093429830140468e-03,-4.464163650698899782e-02,-1.590626280073640167e-02,7.007254470726349826e-02,1.219056876180000040e-02,2.217225720799630151e-02,1.550535921336619952e-02,-2.592261998182820038e-03,-3.324878724762579674e-02,4.862758547755009764e-02
|
||||
-4.910501639104519755e-02,-4.464163650698899782e-02,2.505059600673789980e-02,8.100872220010799790e-03,2.044628591100669870e-02,1.778817874294279927e-02,5.232173725423699961e-02,-3.949338287409189657e-02,-4.118038518800790082e-02,7.206516329203029904e-03
|
||||
-4.183993948900609910e-02,-4.464163650698899782e-02,-4.931843709104429679e-02,-3.665644679856060184e-02,-7.072771253015849857e-03,-2.260797282790679916e-02,8.545647749102060209e-02,-3.949338287409189657e-02,-6.648814822283539983e-02,7.206516329203029904e-03
|
||||
-4.183993948900609910e-02,-4.464163650698899782e-02,4.121777711495139968e-02,-2.632783471735180084e-02,-3.183992270063620150e-02,-3.043668437264510085e-02,-3.603757004385269719e-02,2.942906133203560069e-03,3.365681290238470291e-02,-1.764612515980519894e-02
|
||||
-2.730978568492789874e-02,-4.464163650698899782e-02,-6.332999405149600247e-02,-5.042792957350569760e-02,-8.962994274508359616e-02,-1.043397213549750041e-01,5.232173725423699961e-02,-7.639450375000099436e-02,-5.615757309500619965e-02,-6.735140813782170000e-02
|
||||
4.170844488444359899e-02,-4.464163650698899782e-02,-6.440780612537699845e-02,3.564383776990089764e-02,1.219056876180000040e-02,-5.799374901012400302e-02,1.811790603972839864e-01,-7.639450375000099436e-02,-6.092541861022970299e-04,-5.078298047848289754e-02
|
||||
6.350367559056099842e-02,5.068011873981870252e-02,-2.560657146566450160e-02,1.154374291374709975e-02,6.447677737344290061e-02,4.847672799831700269e-02,3.023191042971450082e-02,-2.592261998182820038e-03,3.839324821169769891e-02,1.963283707370720027e-02
|
||||
-7.090024709716259699e-02,-4.464163650698899782e-02,-4.050329988046450294e-03,-4.009931749229690007e-02,-6.623874415566440021e-02,-7.866154748823310505e-02,5.232173725423699961e-02,-7.639450375000099436e-02,-5.140053526058249722e-02,-3.421455281914410201e-02
|
||||
-4.183993948900609910e-02,5.068011873981870252e-02,4.572166603000769880e-03,-5.387080026724189868e-02,-4.422349842444640161e-02,-2.730519975474979960e-02,-8.021722369289760457e-02,7.120997975363539678e-02,3.664579779339879884e-02,1.963283707370720027e-02
|
||||
-2.730978568492789874e-02,5.068011873981870252e-02,-7.283766209689159811e-03,-4.009931749229690007e-02,-1.120062982761920074e-02,-1.383981589779990050e-02,5.968501286241110343e-02,-3.949338287409189657e-02,-8.238148325810279449e-02,-2.593033898947460017e-02
|
||||
-3.457486258696700065e-02,-4.464163650698899782e-02,-3.746250427835440266e-02,-6.075654165471439799e-02,2.044628591100669870e-02,4.346635260968449710e-02,-1.394774321933030074e-02,-2.592261998182820038e-03,-3.075120986455629965e-02,-7.149351505265640061e-02
|
||||
6.713621404158050254e-02,5.068011873981870252e-02,-2.560657146566450160e-02,-4.009931749229690007e-02,-6.348683843926219983e-02,-5.987263978086120042e-02,-2.902829807069099918e-03,-3.949338287409189657e-02,-1.919704761394450121e-02,1.134862324403770016e-02
|
||||
-4.547247794002570037e-02,5.068011873981870252e-02,-2.452875939178359929e-02,5.974393262605470073e-02,5.310804470794310353e-03,1.496984258683710031e-02,-5.444575906428809897e-02,7.120997975363539678e-02,4.234489544960749752e-02,1.549073015887240078e-02
|
||||
-9.147093429830140468e-03,5.068011873981870252e-02,-1.806188694849819934e-02,-3.321357610482440076e-02,-2.083229983502719873e-02,1.215150643073130074e-02,-7.285394808472339667e-02,7.120997975363539678e-02,2.714857279071319972e-04,1.963283707370720027e-02
|
||||
4.170844488444359899e-02,5.068011873981870252e-02,-1.482845072685549936e-02,-1.714684618924559867e-02,-5.696818394814720174e-03,8.393724889256879915e-03,-1.394774321933030074e-02,-1.854239580664649974e-03,-1.190068480150809939e-02,3.064409414368320182e-03
|
||||
3.807590643342410180e-02,5.068011873981870252e-02,-2.991781976118810041e-02,-4.009931749229690007e-02,-3.321587555883730170e-02,-2.417371513685449835e-02,-1.026610541524320026e-02,-2.592261998182820038e-03,-1.290794225416879923e-02,3.064409414368320182e-03
|
||||
1.628067572730669890e-02,-4.464163650698899782e-02,-4.608500086940160029e-02,-5.670610554934250001e-03,-7.587041416307230279e-02,-6.143838208980879900e-02,-1.394774321933030074e-02,-3.949338287409189657e-02,-5.140053526058249722e-02,1.963283707370720027e-02
|
||||
-1.882016527791040067e-03,-4.464163650698899782e-02,-6.979686649478139548e-02,-1.255635194240680048e-02,-1.930069620102049918e-04,-9.142588970956939953e-03,7.072992627467229731e-02,-3.949338287409189657e-02,-6.291294991625119570e-02,4.034337164788070335e-02
|
||||
-1.882016527791040067e-03,-4.464163650698899782e-02,3.367309259778510089e-02,1.251584758070440062e-01,2.457414448561009990e-02,2.624318721126020146e-02,-1.026610541524320026e-02,-2.592261998182820038e-03,2.671425763351279944e-02,6.105390622205419948e-02
|
||||
6.350367559056099842e-02,5.068011873981870252e-02,-4.050329988046450294e-03,-1.255635194240680048e-02,1.030034574030749966e-01,4.878987646010649742e-02,5.600337505832399948e-02,-2.592261998182820038e-03,8.449528221240310000e-02,-1.764612515980519894e-02
|
||||
1.264813727628719998e-02,5.068011873981870252e-02,-2.021751109626000048e-02,-2.227739861197989939e-03,3.833367306762140020e-02,5.317395492515999966e-02,-6.584467611156170040e-03,3.430885887772629900e-02,-5.145307980263110273e-03,-9.361911330135799444e-03
|
||||
1.264813727628719998e-02,5.068011873981870252e-02,2.416542455238970041e-03,5.630106193231849965e-02,2.732605020201240090e-02,1.716188181936379939e-02,4.127682384197570165e-02,-3.949338287409189657e-02,3.711738233435969789e-03,7.348022696655839847e-02
|
||||
-9.147093429830140468e-03,5.068011873981870252e-02,-3.099563183506899924e-02,-2.632783471735180084e-02,-1.120062982761920074e-02,-1.000728964429089965e-03,-2.131101882750449997e-02,-2.592261998182820038e-03,6.209315616505399656e-03,2.791705090337660150e-02
|
||||
-3.094232413594750000e-02,5.068011873981870252e-02,2.828403222838059977e-02,7.007254470726349826e-02,-1.267806699165139883e-01,-1.068449090492910036e-01,-5.444575906428809897e-02,-4.798064067555100204e-02,-3.075120986455629965e-02,1.549073015887240078e-02
|
||||
-9.632801625429950054e-02,-4.464163650698899782e-02,-3.638469220447349689e-02,-7.452802442965950069e-02,-3.871968699164179961e-02,-2.761834821653930128e-02,1.550535921336619952e-02,-3.949338287409189657e-02,-7.408887149153539631e-02,-1.077697500466389974e-03
|
||||
5.383060374248070309e-03,-4.464163650698899782e-02,-5.794093368209150136e-02,-2.288496402361559975e-02,-6.761469701386560449e-02,-6.832764824917850199e-02,-5.444575906428809897e-02,-2.592261998182820038e-03,4.289568789252869857e-02,-8.391983579716059960e-02
|
||||
-1.035930931563389945e-01,-4.464163650698899782e-02,-3.746250427835440266e-02,-2.632783471735180084e-02,2.558898754392050119e-03,1.998021797546959896e-02,1.182372140927919965e-02,-2.592261998182820038e-03,-6.832974362442149896e-02,-2.593033898947460017e-02
|
||||
7.076875249260000666e-02,-4.464163650698899782e-02,1.211685112016709989e-02,4.252957915737339695e-02,7.135654166444850566e-02,5.348710338694950134e-02,5.232173725423699961e-02,-2.592261998182820038e-03,2.539313491544940155e-02,-5.219804415301099697e-03
|
||||
1.264813727628719998e-02,5.068011873981870252e-02,-2.237313524402180162e-02,-2.977070541108809906e-02,1.081461590359879960e-02,2.843522644378690054e-02,-2.131101882750449997e-02,3.430885887772629900e-02,-6.080248196314420352e-03,-1.077697500466389974e-03
|
||||
-1.641217033186929963e-02,-4.464163650698899782e-02,-3.530688013059259805e-02,-2.632783471735180084e-02,3.282986163481690228e-02,1.716188181936379939e-02,1.001830287073690040e-01,-3.949338287409189657e-02,-7.020931272868760620e-02,-7.977772888232589898e-02
|
||||
-3.820740103798660192e-02,-4.464163650698899782e-02,9.961226972405269262e-03,-4.698505887976939938e-02,-5.935897986465880211e-02,-5.298337362149149743e-02,-1.026610541524320026e-02,-3.949338287409189657e-02,-1.599826775813870117e-02,-4.249876664881350324e-02
|
||||
1.750521923228520000e-03,-4.464163650698899782e-02,-3.961812842611620034e-02,-1.009233664264470032e-01,-2.908801698423390050e-02,-3.012353591085559917e-02,4.495846164606279866e-02,-5.019470792810550031e-02,-6.832974362442149896e-02,-1.294830118603420011e-01
|
||||
4.534098333546320025e-02,-4.464163650698899782e-02,7.139651518361660176e-02,1.215130832538269907e-03,-9.824676969418109224e-03,-1.000728964429089965e-03,1.550535921336619952e-02,-3.949338287409189657e-02,-4.118038518800790082e-02,-7.149351505265640061e-02
|
||||
-7.090024709716259699e-02,5.068011873981870252e-02,-7.518592686418590354e-02,-4.009931749229690007e-02,-5.110326271545199972e-02,-1.509240974495799914e-02,-3.971920784793980114e-02,-2.592261998182820038e-03,-9.643322289178400675e-02,-3.421455281914410201e-02
|
||||
4.534098333546320025e-02,-4.464163650698899782e-02,-6.205954135808240159e-03,1.154374291374709975e-02,6.310082451524179348e-02,1.622243643399520069e-02,9.650139090328180291e-02,-3.949338287409189657e-02,4.289568789252869857e-02,-3.835665973397880263e-02
|
||||
-5.273755484206479882e-02,5.068011873981870252e-02,-4.069594049999709917e-02,-6.764228304218700139e-02,-3.183992270063620150e-02,-3.701280207022530216e-02,3.759518603788870178e-02,-3.949338287409189657e-02,-3.452371533034950118e-02,6.933812005172369786e-02
|
||||
-4.547247794002570037e-02,-4.464163650698899782e-02,-4.824062501716339796e-02,-1.944209332987930153e-02,-1.930069620102049918e-04,-1.603185513032660131e-02,6.704828847058519337e-02,-3.949338287409189657e-02,-2.479118743246069845e-02,1.963283707370720027e-02
|
||||
1.264813727628719998e-02,-4.464163650698899782e-02,-2.560657146566450160e-02,-4.009931749229690007e-02,-3.046396984243510131e-02,-4.515466207675319921e-02,7.809320188284639419e-02,-7.639450375000099436e-02,-7.212845460195599356e-02,1.134862324403770016e-02
|
||||
4.534098333546320025e-02,-4.464163650698899782e-02,5.199589785376040191e-02,-5.387080026724189868e-02,6.310082451524179348e-02,6.476044801137270657e-02,-1.026610541524320026e-02,3.430885887772629900e-02,3.723201120896890010e-02,1.963283707370720027e-02
|
||||
-2.004470878288880029e-02,-4.464163650698899782e-02,4.572166603000769880e-03,9.761551025715360652e-02,5.310804470794310353e-03,-2.072908205716959829e-02,6.336665066649820044e-02,-3.949338287409189657e-02,1.255315281338930007e-02,1.134862324403770016e-02
|
||||
-4.910501639104519755e-02,-4.464163650698899782e-02,-6.440780612537699845e-02,-1.020709899795499975e-01,-2.944912678412469915e-03,-1.540555820674759969e-02,6.336665066649820044e-02,-4.724261825803279663e-02,-3.324878724762579674e-02,-5.492508739331759815e-02
|
||||
-7.816532399920170238e-02,-4.464163650698899782e-02,-1.698407487461730050e-02,-1.255635194240680048e-02,-1.930069620102049918e-04,-1.352666743601040056e-02,7.072992627467229731e-02,-3.949338287409189657e-02,-4.118038518800790082e-02,-9.220404962683000083e-02
|
||||
-7.090024709716259699e-02,-4.464163650698899782e-02,-5.794093368209150136e-02,-8.141376581713200000e-02,-4.559945128264750180e-02,-2.887094206369749880e-02,-4.340084565202689815e-02,-2.592261998182820038e-03,1.143797379512540100e-03,-5.219804415301099697e-03
|
||||
5.623859868852180283e-02,5.068011873981870252e-02,9.961226972405269262e-03,4.941532054484590319e-02,-4.320865536613589623e-03,-1.227407358885230018e-02,-4.340084565202689815e-02,3.430885887772629900e-02,6.078775415074400001e-02,3.205915781821130212e-02
|
||||
-2.730978568492789874e-02,-4.464163650698899782e-02,8.864150836571099701e-02,-2.518021116424929914e-02,2.182223876920789951e-02,4.252690722431590187e-02,-3.235593223976569732e-02,3.430885887772629900e-02,2.863770518940129874e-03,7.762233388139309909e-02
|
||||
1.750521923228520000e-03,5.068011873981870252e-02,-5.128142061927360405e-03,-1.255635194240680048e-02,-1.532848840222260020e-02,-1.383981589779990050e-02,8.142083605192099172e-03,-3.949338287409189657e-02,-6.080248196314420352e-03,-6.735140813782170000e-02
|
||||
-1.882016527791040067e-03,-4.464163650698899782e-02,-6.440780612537699845e-02,1.154374291374709975e-02,2.732605020201240090e-02,3.751653183568340322e-02,-1.394774321933030074e-02,3.430885887772629900e-02,1.178390038357590014e-02,-5.492508739331759815e-02
|
||||
1.628067572730669890e-02,-4.464163650698899782e-02,1.750591148957160101e-02,-2.288496402361559975e-02,6.034891879883950289e-02,4.440579799505309927e-02,3.023191042971450082e-02,-2.592261998182820038e-03,3.723201120896890010e-02,-1.077697500466389974e-03
|
||||
1.628067572730669890e-02,5.068011873981870252e-02,-4.500718879552070145e-02,6.318680331979099896e-02,1.081461590359879960e-02,-3.744320408500199904e-04,6.336665066649820044e-02,-3.949338287409189657e-02,-3.075120986455629965e-02,3.620126473304600273e-02
|
||||
-9.269547780327989928e-02,-4.464163650698899782e-02,2.828403222838059977e-02,-1.599922263614299983e-02,3.695772020942030001e-02,2.499059336410210108e-02,5.600337505832399948e-02,-3.949338287409189657e-02,-5.145307980263110273e-03,-1.077697500466389974e-03
|
||||
5.987113713954139715e-02,5.068011873981870252e-02,4.121777711495139968e-02,1.154374291374709975e-02,4.108557878402369773e-02,7.071026878537380045e-02,-3.603757004385269719e-02,3.430885887772629900e-02,-1.090443584737709956e-02,-3.007244590430930078e-02
|
||||
-2.730978568492789874e-02,-4.464163650698899782e-02,6.492964274033119487e-02,-2.227739861197989939e-03,-2.496015840963049931e-02,-1.728444897748479883e-02,2.286863482154040048e-02,-3.949338287409189657e-02,-6.117659509433449883e-02,-6.320930122298699938e-02
|
||||
2.354575262934580082e-02,5.068011873981870252e-02,-3.207344390894990155e-02,-4.009931749229690007e-02,-3.183992270063620150e-02,-2.166852744253820046e-02,-1.394774321933030074e-02,-2.592261998182820038e-03,-1.090443584737709956e-02,1.963283707370720027e-02
|
||||
-9.632801625429950054e-02,-4.464163650698899782e-02,-7.626373893806680238e-02,-4.354218818603310115e-02,-4.559945128264750180e-02,-3.482076283769860309e-02,8.142083605192099172e-03,-3.949338287409189657e-02,-5.947269741072230137e-02,-8.391983579716059960e-02
|
||||
2.717829108036539862e-02,-4.464163650698899782e-02,4.984027370599859730e-02,-5.501842382034440038e-02,-2.944912678412469915e-03,4.064801645357869753e-02,-5.812739686837520292e-02,5.275941931568080279e-02,-5.295879323920039961e-02,-5.219804415301099697e-03
|
||||
1.991321417832630017e-02,5.068011873981870252e-02,4.552902541047500196e-02,2.990571983224480160e-02,-6.211088558106100249e-02,-5.580170977759729700e-02,-7.285394808472339667e-02,2.692863470254440103e-02,4.560080841412490066e-02,4.034337164788070335e-02
|
||||
3.807590643342410180e-02,5.068011873981870252e-02,-9.439390357450949676e-03,2.362754385640800005e-03,1.182945896190920002e-03,3.751653183568340322e-02,-5.444575906428809897e-02,5.017634085436720182e-02,-2.595242443518940012e-02,1.066170822852360034e-01
|
||||
4.170844488444359899e-02,5.068011873981870252e-02,-3.207344390894990155e-02,-2.288496402361559975e-02,-4.972730985725089953e-02,-4.014428668812060341e-02,3.023191042971450082e-02,-3.949338287409189657e-02,-1.260973855604090033e-01,1.549073015887240078e-02
|
||||
1.991321417832630017e-02,-4.464163650698899782e-02,4.572166603000769880e-03,-2.632783471735180084e-02,2.319819162740899970e-02,1.027261565999409987e-02,6.704828847058519337e-02,-3.949338287409189657e-02,-2.364455757213410059e-02,-4.664087356364819692e-02
|
||||
-8.543040090124079389e-02,-4.464163650698899782e-02,2.073934771121430098e-02,-2.632783471735180084e-02,5.310804470794310353e-03,1.966706951368000014e-02,-2.902829807069099918e-03,-2.592261998182820038e-03,-2.364455757213410059e-02,3.064409414368320182e-03
|
||||
1.991321417832630017e-02,5.068011873981870252e-02,1.427247526792889930e-02,6.318680331979099896e-02,1.494247447820220079e-02,2.029336643725910064e-02,-4.708248345611389801e-02,3.430885887772629900e-02,4.666077235681449775e-02,9.004865462589720093e-02
|
||||
2.354575262934580082e-02,-4.464163650698899782e-02,1.101977498433290015e-01,6.318680331979099896e-02,1.356652162000110060e-02,-3.294187206696139875e-02,-2.499265663159149983e-02,2.065544415363990138e-02,9.924022573398999514e-02,2.377494398854190089e-02
|
||||
-3.094232413594750000e-02,5.068011873981870252e-02,1.338730381358059929e-03,-5.670610554934250001e-03,6.447677737344290061e-02,4.941617338368559792e-02,-4.708248345611389801e-02,1.081111006295440019e-01,8.379676636552239877e-02,3.064409414368320182e-03
|
||||
4.897352178648269744e-02,5.068011873981870252e-02,5.846277029704580186e-02,7.007254470726349826e-02,1.356652162000110060e-02,2.060651489904859884e-02,-2.131101882750449997e-02,3.430885887772629900e-02,2.200405045615050001e-02,2.791705090337660150e-02
|
||||
5.987113713954139715e-02,-4.464163650698899782e-02,-2.129532317014089932e-02,8.728689817594480205e-02,4.521343735862710239e-02,3.156671106168230240e-02,-4.708248345611389801e-02,7.120997975363539678e-02,7.912108138965789905e-02,1.356118306890790048e-01
|
||||
-5.637009329308430294e-02,5.068011873981870252e-02,-1.051720243133190055e-02,2.531522568869210010e-02,2.319819162740899970e-02,4.002171952999959703e-02,-3.971920784793980114e-02,3.430885887772629900e-02,2.061233072136409855e-02,5.691179930721949887e-02
|
||||
1.628067572730669890e-02,-4.464163650698899782e-02,-4.716281294328249912e-02,-2.227739861197989939e-03,-1.945634697682600139e-02,-4.296262284422640298e-02,3.391354823380159783e-02,-3.949338287409189657e-02,2.736770754260900093e-02,2.791705090337660150e-02
|
||||
-4.910501639104519755e-02,-4.464163650698899782e-02,4.572166603000769880e-03,1.154374291374709975e-02,-3.734373413344069942e-02,-1.853704282464289921e-02,-1.762938102341739949e-02,-2.592261998182820038e-03,-3.980959436433750137e-02,-2.178823207463989955e-02
|
||||
6.350367559056099842e-02,-4.464163650698899782e-02,1.750591148957160101e-02,2.187235499495579841e-02,8.062710187196569719e-03,2.154596028441720101e-02,-3.603757004385269719e-02,3.430885887772629900e-02,1.990842087631829876e-02,1.134862324403770016e-02
|
||||
4.897352178648269744e-02,5.068011873981870252e-02,8.109682384854470516e-02,2.187235499495579841e-02,4.383748450042589812e-02,6.413415108779360607e-02,-5.444575906428809897e-02,7.120997975363539678e-02,3.243322577960189995e-02,4.862758547755009764e-02
|
||||
5.383060374248070309e-03,5.068011873981870252e-02,3.475090467166599972e-02,-1.080116308095460057e-03,1.525377602983150060e-01,1.987879896572929961e-01,-6.180903467246220279e-02,1.852344432601940039e-01,1.556684454070180086e-02,7.348022696655839847e-02
|
||||
-5.514554978810590376e-03,-4.464163650698899782e-02,2.397278393285700096e-02,8.100872220010799790e-03,-3.459182841703849903e-02,-3.889169284096249957e-02,2.286863482154040048e-02,-3.949338287409189657e-02,-1.599826775813870117e-02,-1.350401824497050006e-02
|
||||
-5.514554978810590376e-03,5.068011873981870252e-02,-8.361578283570040432e-03,-2.227739861197989939e-03,-3.321587555883730170e-02,-6.363042132233559522e-02,-3.603757004385269719e-02,-2.592261998182820038e-03,8.058546423866649877e-02,7.206516329203029904e-03
|
||||
-8.906293935226029801e-02,-4.464163650698899782e-02,-6.117436990373419786e-02,-2.632783471735180084e-02,-5.523112129005539744e-02,-5.454911593043910295e-02,4.127682384197570165e-02,-7.639450375000099436e-02,-9.393564550871469354e-02,-5.492508739331759815e-02
|
||||
3.444336798240450054e-02,5.068011873981870252e-02,-1.894705840284650021e-03,-1.255635194240680048e-02,3.833367306762140020e-02,1.371724873967889932e-02,7.809320188284639419e-02,-3.949338287409189657e-02,4.551890466127779880e-03,-9.634615654166470144e-02
|
||||
-5.273755484206479882e-02,-4.464163650698899782e-02,-6.225218197761509670e-02,-2.632783471735180084e-02,-5.696818394814720174e-03,-5.071658967693000106e-03,3.023191042971450082e-02,-3.949338287409189657e-02,-3.075120986455629965e-02,-7.149351505265640061e-02
|
||||
9.015598825267629943e-03,-4.464163650698899782e-02,1.642809941569069870e-02,4.658001526274530187e-03,9.438663045397699403e-03,1.058576412178359981e-02,-2.867429443567860031e-02,3.430885887772629900e-02,3.896836603088559697e-02,1.190434030297399942e-01
|
||||
-6.363517019512339445e-02,5.068011873981870252e-02,9.618619288287730273e-02,1.045012516446259948e-01,-2.944912678412469915e-03,-4.758510505903469807e-03,-6.584467611156170040e-03,-2.592261998182820038e-03,2.269202256674450122e-02,7.348022696655839847e-02
|
||||
-9.632801625429950054e-02,-4.464163650698899782e-02,-6.979686649478139548e-02,-6.764228304218700139e-02,-1.945634697682600139e-02,-1.070833127990459925e-02,1.550535921336619952e-02,-3.949338287409189657e-02,-4.687948284421659950e-02,-7.977772888232589898e-02
|
||||
1.628067572730669890e-02,5.068011873981870252e-02,-2.129532317014089932e-02,-9.113481248670509197e-03,3.420581449301800248e-02,4.785043107473799934e-02,7.788079970179680352e-04,-2.592261998182820038e-03,-1.290794225416879923e-02,2.377494398854190089e-02
|
||||
-4.183993948900609910e-02,5.068011873981870252e-02,-5.362968538656789907e-02,-4.009931749229690007e-02,-8.412613131227909824e-02,-7.177228132886340206e-02,-2.902829807069099918e-03,-3.949338287409189657e-02,-7.212845460195599356e-02,-3.007244590430930078e-02
|
||||
-7.453278554818210111e-02,-4.464163650698899782e-02,4.337340126271319735e-02,-3.321357610482440076e-02,1.219056876180000040e-02,2.518648827290310109e-04,6.336665066649820044e-02,-3.949338287409189657e-02,-2.712864555432650121e-02,-4.664087356364819692e-02
|
||||
-5.514554978810590376e-03,-4.464163650698899782e-02,5.630714614928399725e-02,-3.665644679856060184e-02,-4.835135699904979933e-02,-4.296262284422640298e-02,-7.285394808472339667e-02,3.799897096531720114e-02,5.078151336297320045e-02,5.691179930721949887e-02
|
||||
-9.269547780327989928e-02,-4.464163650698899782e-02,-8.165279930747129655e-02,-5.731367096097819691e-02,-6.073493272285990230e-02,-6.801449978738899338e-02,4.864009945014990260e-02,-7.639450375000099436e-02,-6.648814822283539983e-02,-2.178823207463989955e-02
|
||||
5.383060374248070309e-03,-4.464163650698899782e-02,4.984027370599859730e-02,9.761551025715360652e-02,-1.532848840222260020e-02,-1.634500359211620013e-02,-6.584467611156170040e-03,-2.592261998182820038e-03,1.703713241477999851e-02,-1.350401824497050006e-02
|
||||
3.444336798240450054e-02,5.068011873981870252e-02,1.112755619172099975e-01,7.695828609473599757e-02,-3.183992270063620150e-02,-3.388131745233000092e-02,-2.131101882750449997e-02,-2.592261998182820038e-03,2.801650652326400162e-02,7.348022696655839847e-02
|
||||
2.354575262934580082e-02,-4.464163650698899782e-02,6.169620651868849837e-02,5.285819123858220142e-02,-3.459182841703849903e-02,-4.891244361822749687e-02,-2.867429443567860031e-02,-2.592261998182820038e-03,5.472400334817909689e-02,-5.219804415301099697e-03
|
||||
4.170844488444359899e-02,5.068011873981870252e-02,1.427247526792889930e-02,4.252957915737339695e-02,-3.046396984243510131e-02,-1.313877426218630021e-03,-4.340084565202689815e-02,-2.592261998182820038e-03,-3.324878724762579674e-02,1.549073015887240078e-02
|
||||
-2.730978568492789874e-02,-4.464163650698899782e-02,4.768464955823679963e-02,-4.698505887976939938e-02,3.420581449301800248e-02,5.724488492842390308e-02,-8.021722369289760457e-02,1.302517731550900115e-01,4.506616833626150148e-02,1.314697237742440128e-01
|
||||
4.170844488444359899e-02,5.068011873981870252e-02,1.211685112016709989e-02,3.908670846363720280e-02,5.484510736603499803e-02,4.440579799505309927e-02,4.460445801105040325e-03,-2.592261998182820038e-03,4.560080841412490066e-02,-1.077697500466389974e-03
|
||||
-3.094232413594750000e-02,-4.464163650698899782e-02,5.649978676881649634e-03,-9.113481248670509197e-03,1.907033305280559851e-02,6.827982580309210209e-03,7.441156407875940126e-02,-3.949338287409189657e-02,-4.118038518800790082e-02,-4.249876664881350324e-02
|
||||
3.081082953138499989e-02,5.068011873981870252e-02,4.660683748435590079e-02,-1.599922263614299983e-02,2.044628591100669870e-02,5.066876723084379891e-02,-5.812739686837520292e-02,7.120997975363539678e-02,6.209315616505399656e-03,7.206516329203029904e-03
|
||||
-4.183993948900609910e-02,-4.464163650698899782e-02,1.285205550993039902e-01,6.318680331979099896e-02,-3.321587555883730170e-02,-3.262872360517189707e-02,1.182372140927919965e-02,-3.949338287409189657e-02,-1.599826775813870117e-02,-5.078298047848289754e-02
|
||||
-3.094232413594750000e-02,5.068011873981870252e-02,5.954058237092670069e-02,1.215130832538269907e-03,1.219056876180000040e-02,3.156671106168230240e-02,-4.340084565202689815e-02,3.430885887772629900e-02,1.482271084126630077e-02,7.206516329203029904e-03
|
||||
-5.637009329308430294e-02,-4.464163650698899782e-02,9.295275666123460623e-02,-1.944209332987930153e-02,1.494247447820220079e-02,2.342485105515439842e-02,-2.867429443567860031e-02,2.545258986750810123e-02,2.605608963368469949e-02,4.034337164788070335e-02
|
||||
-6.000263174410389727e-02,5.068011873981870252e-02,1.535028734180979987e-02,-1.944209332987930153e-02,3.695772020942030001e-02,4.816357953652750101e-02,1.918699701745330000e-02,-2.592261998182820038e-03,-3.075120986455629965e-02,-1.077697500466389974e-03
|
||||
-4.910501639104519755e-02,5.068011873981870252e-02,-5.128142061927360405e-03,-4.698505887976939938e-02,-2.083229983502719873e-02,-2.041593359538010008e-02,-6.917231028063640375e-02,7.120997975363539678e-02,6.123790751970099866e-02,-3.835665973397880263e-02
|
||||
2.354575262934580082e-02,-4.464163650698899782e-02,7.031870310973570293e-02,2.531522568869210010e-02,-3.459182841703849903e-02,-1.446611282137899926e-02,-3.235593223976569732e-02,-2.592261998182820038e-03,-1.919704761394450121e-02,-9.361911330135799444e-03
|
||||
1.750521923228520000e-03,-4.464163650698899782e-02,-4.050329988046450294e-03,-5.670610554934250001e-03,-8.448724111216979540e-03,-2.386056667506489953e-02,5.232173725423699961e-02,-3.949338287409189657e-02,-8.944018957797799166e-03,-1.350401824497050006e-02
|
||||
-3.457486258696700065e-02,5.068011873981870252e-02,-8.168937664037369826e-04,7.007254470726349826e-02,3.970962592582259754e-02,6.695248724389940564e-02,-6.549067247654929980e-02,1.081111006295440019e-01,2.671425763351279944e-02,7.348022696655839847e-02
|
||||
4.170844488444359899e-02,5.068011873981870252e-02,-4.392937672163980262e-02,6.318680331979099896e-02,-4.320865536613589623e-03,1.622243643399520069e-02,-1.394774321933030074e-02,-2.592261998182820038e-03,-3.452371533034950118e-02,1.134862324403770016e-02
|
||||
6.713621404158050254e-02,5.068011873981870252e-02,2.073934771121430098e-02,-5.670610554934250001e-03,2.044628591100669870e-02,2.624318721126020146e-02,-2.902829807069099918e-03,-2.592261998182820038e-03,8.640282933063080789e-03,3.064409414368320182e-03
|
||||
-2.730978568492789874e-02,5.068011873981870252e-02,6.061839444480759953e-02,4.941532054484590319e-02,8.511607024645979902e-02,8.636769187485039689e-02,-2.902829807069099918e-03,3.430885887772629900e-02,3.781447882634390162e-02,4.862758547755009764e-02
-1.641217033186929963e-02,-4.464163650698899782e-02,-1.051720243133190055e-02,1.215130832538269907e-03,-3.734373413344069942e-02,-3.576020822306719832e-02,1.182372140927919965e-02,-3.949338287409189657e-02,-2.139368094035999993e-02,-3.421455281914410201e-02
-1.882016527791040067e-03,5.068011873981870252e-02,-3.315125598283080038e-02,-1.829446977677679984e-02,3.145390877661580209e-02,4.284005568610550069e-02,-1.394774321933030074e-02,1.991742173612169944e-02,1.022564240495780000e-02,2.791705090337660150e-02
-1.277963188084970010e-02,-4.464163650698899782e-02,-6.548561819925780014e-02,-6.993753018282070077e-02,1.182945896190920002e-03,1.684873335757430118e-02,-2.902829807069099918e-03,-7.020396503291909812e-03,-3.075120986455629965e-02,-5.078298047848289754e-02
-5.514554978810590376e-03,-4.464163650698899782e-02,4.337340126271319735e-02,8.728689817594480205e-02,1.356652162000110060e-02,7.141131042098750048e-03,-1.394774321933030074e-02,-2.592261998182820038e-03,4.234489544960749752e-02,-1.764612515980519894e-02
-9.147093429830140468e-03,-4.464163650698899782e-02,-6.225218197761509670e-02,-7.452802442965950069e-02,-2.358420555142939912e-02,-1.321351897422090062e-02,4.460445801105040325e-03,-3.949338287409189657e-02,-3.581672810154919867e-02,-4.664087356364819692e-02
-4.547247794002570037e-02,5.068011873981870252e-02,6.385183066645029604e-02,7.007254470726349826e-02,1.332744202834990066e-01,1.314610703725430096e-01,-3.971920784793980114e-02,1.081111006295440019e-01,7.573758845754760549e-02,8.590654771106250032e-02
-5.273755484206479882e-02,-4.464163650698899782e-02,3.043965637614240091e-02,-7.452802442965950069e-02,-2.358420555142939912e-02,-1.133462820348369975e-02,-2.902829807069099918e-03,-2.592261998182820038e-03,-3.075120986455629965e-02,-1.077697500466389974e-03
1.628067572730669890e-02,5.068011873981870252e-02,7.247432725749750060e-02,7.695828609473599757e-02,-8.448724111216979540e-03,5.575388733151089883e-03,-6.584467611156170040e-03,-2.592261998182820038e-03,-2.364455757213410059e-02,6.105390622205419948e-02
4.534098333546320025e-02,-4.464163650698899782e-02,-1.913969902237900103e-02,2.187235499495579841e-02,2.732605020201240090e-02,-1.352666743601040056e-02,1.001830287073690040e-01,-3.949338287409189657e-02,1.776347786711730131e-02,-1.350401824497050006e-02
-4.183993948900609910e-02,-4.464163650698899782e-02,-6.656343027313869898e-02,-4.698505887976939938e-02,-3.734373413344069942e-02,-4.327577130601600180e-02,4.864009945014990260e-02,-3.949338287409189657e-02,-5.615757309500619965e-02,-1.350401824497050006e-02
-5.637009329308430294e-02,5.068011873981870252e-02,-6.009655782985329903e-02,-3.665644679856060184e-02,-8.825398988688250290e-02,-7.083283594349480683e-02,-1.394774321933030074e-02,-3.949338287409189657e-02,-7.814091066906959926e-02,-1.046303703713340055e-01
7.076875249260000666e-02,-4.464163650698899782e-02,6.924089103585480409e-02,3.793908501382069892e-02,2.182223876920789951e-02,1.504458729887179960e-03,-3.603757004385269719e-02,3.910600459159439823e-02,7.763278919555950675e-02,1.066170822852360034e-01
1.750521923228520000e-03,5.068011873981870252e-02,5.954058237092670069e-02,-2.227739861197989939e-03,6.172487165704060308e-02,6.319470570242499696e-02,-5.812739686837520292e-02,1.081111006295440019e-01,6.898221163630259556e-02,1.273276168594099922e-01
-1.882016527791040067e-03,-4.464163650698899782e-02,-2.668438353954540043e-02,4.941532054484590319e-02,5.897296594063840269e-02,-1.603185513032660131e-02,-4.708248345611389801e-02,7.120997975363539678e-02,1.335989800130079896e-01,1.963283707370720027e-02
2.354575262934580082e-02,5.068011873981870252e-02,-2.021751109626000048e-02,-3.665644679856060184e-02,-1.395253554402150001e-02,-1.509240974495799914e-02,5.968501286241110343e-02,-3.949338287409189657e-02,-9.643322289178400675e-02,-1.764612515980519894e-02
-2.004470878288880029e-02,-4.464163650698899782e-02,-4.608500086940160029e-02,-9.862811928581330378e-02,-7.587041416307230279e-02,-5.987263978086120042e-02,-1.762938102341739949e-02,-3.949338287409189657e-02,-5.140053526058249722e-02,-4.664087356364819692e-02
4.170844488444359899e-02,5.068011873981870252e-02,7.139651518361660176e-02,8.100872220010799790e-03,3.833367306762140020e-02,1.590928797220559840e-02,-1.762938102341739949e-02,3.430885887772629900e-02,7.341007804911610368e-02,8.590654771106250032e-02
-6.363517019512339445e-02,5.068011873981870252e-02,-7.949717515970949888e-02,-5.670610554934250001e-03,-7.174255558846899528e-02,-6.644875747844139480e-02,-1.026610541524320026e-02,-3.949338287409189657e-02,-1.811826730789670159e-02,-5.492508739331759815e-02
1.628067572730669890e-02,5.068011873981870252e-02,9.961226972405269262e-03,-4.354218818603310115e-02,-9.650970703608929835e-02,-9.463211903949929338e-02,-3.971920784793980114e-02,-3.949338287409189657e-02,1.703713241477999851e-02,7.206516329203029904e-03
6.713621404158050254e-02,-4.464163650698899782e-02,-3.854031635223530150e-02,-2.632783471735180084e-02,-3.183992270063620150e-02,-2.636575436938120090e-02,8.142083605192099172e-03,-3.949338287409189657e-02,-2.712864555432650121e-02,3.064409414368320182e-03
4.534098333546320025e-02,5.068011873981870252e-02,1.966153563733339868e-02,3.908670846363720280e-02,2.044628591100669870e-02,2.593003874947069978e-02,8.142083605192099172e-03,-2.592261998182820038e-03,-3.303712578676999863e-03,1.963283707370720027e-02
4.897352178648269744e-02,-4.464163650698899782e-02,2.720622015449970094e-02,-2.518021116424929914e-02,2.319819162740899970e-02,1.841447566652189977e-02,-6.180903467246220279e-02,8.006624876385350087e-02,7.222365081991240221e-02,3.205915781821130212e-02
4.170844488444359899e-02,-4.464163650698899782e-02,-8.361578283570040432e-03,-2.632783471735180084e-02,2.457414448561009990e-02,1.622243643399520069e-02,7.072992627467229731e-02,-3.949338287409189657e-02,-4.836172480289190057e-02,-3.007244590430930078e-02
-2.367724723390840155e-02,-4.464163650698899782e-02,-1.590626280073640167e-02,-1.255635194240680048e-02,2.044628591100669870e-02,4.127431337715779802e-02,-4.340084565202689815e-02,3.430885887772629900e-02,1.407245251576850001e-02,-9.361911330135799444e-03
-3.820740103798660192e-02,5.068011873981870252e-02,4.572166603000769880e-03,3.564383776990089764e-02,-1.120062982761920074e-02,5.888537194940629722e-03,-4.708248345611389801e-02,3.430885887772629900e-02,1.630495279994180133e-02,-1.077697500466389974e-03
4.897352178648269744e-02,-4.464163650698899782e-02,-4.285156464775889684e-02,-5.387080026724189868e-02,4.521343735862710239e-02,5.004247030726469841e-02,3.391354823380159783e-02,-2.592261998182820038e-03,-2.595242443518940012e-02,-6.320930122298699938e-02
4.534098333546320025e-02,5.068011873981870252e-02,5.649978676881649634e-03,5.630106193231849965e-02,6.447677737344290061e-02,8.918602803095619647e-02,-3.971920784793980114e-02,7.120997975363539678e-02,1.556684454070180086e-02,-9.361911330135799444e-03
4.534098333546320025e-02,5.068011873981870252e-02,-3.530688013059259805e-02,6.318680331979099896e-02,-4.320865536613589623e-03,-1.627025888008149911e-03,-1.026610541524320026e-02,-2.592261998182820038e-03,1.556684454070180086e-02,5.691179930721949887e-02
1.628067572730669890e-02,-4.464163650698899782e-02,2.397278393285700096e-02,-2.288496402361559975e-02,-2.496015840963049931e-02,-2.605260590759169922e-02,-3.235593223976569732e-02,-2.592261998182820038e-03,3.723201120896890010e-02,3.205915781821130212e-02
-7.453278554818210111e-02,5.068011873981870252e-02,-1.806188694849819934e-02,8.100872220010799790e-03,-1.945634697682600139e-02,-2.480001206043359885e-02,-6.549067247654929980e-02,3.430885887772629900e-02,6.731721791468489591e-02,-1.764612515980519894e-02
-8.179786245022120650e-02,5.068011873981870252e-02,4.229558918883229851e-02,-1.944209332987930153e-02,3.970962592582259754e-02,5.755803339021339782e-02,-6.917231028063640375e-02,1.081111006295440019e-01,4.718616788601970313e-02,-3.835665973397880263e-02
-6.726770864614299572e-02,-4.464163650698899782e-02,-5.470749746044879791e-02,-2.632783471735180084e-02,-7.587041416307230279e-02,-8.210618056791800512e-02,4.864009945014990260e-02,-7.639450375000099436e-02,-8.682899321629239386e-02,-1.046303703713340055e-01
5.383060374248070309e-03,-4.464163650698899782e-02,-2.972517914165530208e-03,4.941532054484590319e-02,7.410844738085080319e-02,7.071026878537380045e-02,4.495846164606279866e-02,-2.592261998182820038e-03,-1.498586820292070049e-03,-9.361911330135799444e-03
-1.882016527791040067e-03,-4.464163650698899782e-02,-6.656343027313869898e-02,1.215130832538269907e-03,-2.944912678412469915e-03,3.070201038834840124e-03,1.182372140927919965e-02,-2.592261998182820038e-03,-2.028874775162960165e-02,-2.593033898947460017e-02
9.015598825267629943e-03,-4.464163650698899782e-02,-1.267282657909369996e-02,2.875809638242839833e-02,-1.808039411862490120e-02,-5.071658967693000106e-03,-4.708248345611389801e-02,3.430885887772629900e-02,2.337484127982079885e-02,-5.219804415301099697e-03
-5.514554978810590376e-03,5.068011873981870252e-02,-4.177375257387799801e-02,-4.354218818603310115e-02,-7.999827273767569358e-02,-7.615635979391689736e-02,-3.235593223976569732e-02,-3.949338287409189657e-02,1.022564240495780000e-02,-9.361911330135799444e-03
5.623859868852180283e-02,5.068011873981870252e-02,-3.099563183506899924e-02,8.100872220010799790e-03,1.907033305280559851e-02,2.123281182262769934e-02,3.391354823380159783e-02,-3.949338287409189657e-02,-2.952762274177360077e-02,-5.906719430815229877e-02
9.015598825267629943e-03,5.068011873981870252e-02,-5.128142061927360405e-03,-6.419941234845069622e-02,6.998058880624739853e-02,8.386250418053420308e-02,-3.971920784793980114e-02,7.120997975363539678e-02,3.953987807202419963e-02,1.963283707370720027e-02
-6.726770864614299572e-02,-4.464163650698899782e-02,-5.901874575597240019e-02,3.220096707616459941e-02,-5.110326271545199972e-02,-4.953874054180659736e-02,-1.026610541524320026e-02,-3.949338287409189657e-02,2.007840549823790115e-03,2.377494398854190089e-02
2.717829108036539862e-02,5.068011873981870252e-02,2.505059600673789980e-02,1.498661360748330083e-02,2.595009734381130070e-02,4.847672799831700269e-02,-3.971920784793980114e-02,3.430885887772629900e-02,7.837142301823850701e-03,2.377494398854190089e-02
-2.367724723390840155e-02,-4.464163650698899782e-02,-4.608500086940160029e-02,-3.321357610482440076e-02,3.282986163481690228e-02,3.626393798852529937e-02,3.759518603788870178e-02,-2.592261998182820038e-03,-3.324878724762579674e-02,1.134862324403770016e-02
4.897352178648269744e-02,5.068011873981870252e-02,3.494354529119849794e-03,7.007254470726349826e-02,-8.448724111216979540e-03,1.340410027788939938e-02,-5.444575906428809897e-02,3.430885887772629900e-02,1.331596790892770020e-02,3.620126473304600273e-02
-5.273755484206479882e-02,-4.464163650698899782e-02,5.415152200152219958e-02,-2.632783471735180084e-02,-5.523112129005539744e-02,-3.388131745233000092e-02,-1.394774321933030074e-02,-3.949338287409189657e-02,-7.408887149153539631e-02,-5.906719430815229877e-02
4.170844488444359899e-02,-4.464163650698899782e-02,-4.500718879552070145e-02,3.449621432008449784e-02,4.383748450042589812e-02,-1.571870666853709964e-02,3.759518603788870178e-02,-1.440062067847370023e-02,8.989869327767099905e-02,7.206516329203029904e-03
5.623859868852180283e-02,-4.464163650698899782e-02,-5.794093368209150136e-02,-7.965857695567990157e-03,5.209320164963270050e-02,4.910302492189610318e-02,5.600337505832399948e-02,-2.141183364489639834e-02,-2.832024254799870092e-02,4.448547856271539702e-02
-3.457486258696700065e-02,5.068011873981870252e-02,-5.578530953432969675e-02,-1.599922263614299983e-02,-9.824676969418109224e-03,-7.889995123798789270e-03,3.759518603788870178e-02,-3.949338287409189657e-02,-5.295879323920039961e-02,2.791705090337660150e-02
8.166636784565869944e-02,5.068011873981870252e-02,1.338730381358059929e-03,3.564383776990089764e-02,1.263946559924939983e-01,9.106491880169340081e-02,1.918699701745330000e-02,3.430885887772629900e-02,8.449528221240310000e-02,-3.007244590430930078e-02
-1.882016527791040067e-03,5.068011873981870252e-02,3.043965637614240091e-02,5.285819123858220142e-02,3.970962592582259754e-02,5.661858800484489973e-02,-3.971920784793980114e-02,7.120997975363539678e-02,2.539313491544940155e-02,2.791705090337660150e-02
1.107266754538149961e-01,5.068011873981870252e-02,6.727790750762559745e-03,2.875809638242839833e-02,-2.771206412603280031e-02,-7.263698200219739949e-03,-4.708248345611389801e-02,3.430885887772629900e-02,2.007840549823790115e-03,7.762233388139309909e-02
-3.094232413594750000e-02,-4.464163650698899782e-02,4.660683748435590079e-02,1.498661360748330083e-02,-1.670444126042380101e-02,-4.703355284749029946e-02,7.788079970179680352e-04,-2.592261998182820038e-03,6.345592137206540473e-02,-2.593033898947460017e-02
1.750521923228520000e-03,5.068011873981870252e-02,2.612840808061879863e-02,-9.113481248670509197e-03,2.457414448561009990e-02,3.845597722105199845e-02,-2.131101882750449997e-02,3.430885887772629900e-02,9.436409146079870192e-03,3.064409414368320182e-03
9.015598825267629943e-03,-4.464163650698899782e-02,4.552902541047500196e-02,2.875809638242839833e-02,1.219056876180000040e-02,-1.383981589779990050e-02,2.655027262562750096e-02,-3.949338287409189657e-02,4.613233103941480340e-02,3.620126473304600273e-02
3.081082953138499989e-02,-4.464163650698899782e-02,4.013996504107050084e-02,7.695828609473599757e-02,1.769438019460449832e-02,3.782968029747289795e-02,-2.867429443567860031e-02,3.430885887772629900e-02,-1.498586820292070049e-03,1.190434030297399942e-01
3.807590643342410180e-02,5.068011873981870252e-02,-1.806188694849819934e-02,6.662967401352719310e-02,-5.110326271545199972e-02,-1.665815205390569834e-02,-7.653558588881050062e-02,3.430885887772629900e-02,-1.190068480150809939e-02,-1.350401824497050006e-02
9.015598825267629943e-03,-4.464163650698899782e-02,1.427247526792889930e-02,1.498661360748330083e-02,5.484510736603499803e-02,4.722413415115889884e-02,7.072992627467229731e-02,-3.949338287409189657e-02,-3.324878724762579674e-02,-5.906719430815229877e-02
9.256398319871740610e-02,-4.464163650698899782e-02,3.690652881942779739e-02,2.187235499495579841e-02,-2.496015840963049931e-02,-1.665815205390569834e-02,7.788079970179680352e-04,-3.949338287409189657e-02,-2.251217192966049885e-02,-2.178823207463989955e-02
6.713621404158050254e-02,-4.464163650698899782e-02,3.494354529119849794e-03,3.564383776990089764e-02,4.934129593323050011e-02,3.125356259989280072e-02,7.072992627467229731e-02,-3.949338287409189657e-02,-6.092541861022970299e-04,1.963283707370720027e-02
1.750521923228520000e-03,-4.464163650698899782e-02,-7.087467856866229432e-02,-2.288496402361559975e-02,-1.568959820211340015e-03,-1.000728964429089965e-03,2.655027262562750096e-02,-3.949338287409189657e-02,-2.251217192966049885e-02,7.206516329203029904e-03
3.081082953138499989e-02,-4.464163650698899782e-02,-3.315125598283080038e-02,-2.288496402361559975e-02,-4.697540414084860200e-02,-8.116673518254939601e-02,1.038646665114559969e-01,-7.639450375000099436e-02,-3.980959436433750137e-02,-5.492508739331759815e-02
2.717829108036539862e-02,5.068011873981870252e-02,9.403056873511560221e-02,9.761551025715360652e-02,-3.459182841703849903e-02,-3.200242668159279658e-02,-4.340084565202689815e-02,-2.592261998182820038e-03,3.664579779339879884e-02,1.066170822852360034e-01
1.264813727628719998e-02,5.068011873981870252e-02,3.582871674554689856e-02,4.941532054484590319e-02,5.346915450783389784e-02,7.415490186505870052e-02,-6.917231028063640375e-02,1.450122215054540087e-01,4.560080841412490066e-02,4.862758547755009764e-02
7.440129094361959405e-02,-4.464163650698899782e-02,3.151746845002330322e-02,1.010583809508899950e-01,4.658939021682820258e-02,3.689023491210430272e-02,1.550535921336619952e-02,-2.592261998182820038e-03,3.365681290238470291e-02,4.448547856271539702e-02
-4.183993948900609910e-02,-4.464163650698899782e-02,-6.548561819925780014e-02,-4.009931749229690007e-02,-5.696818394814720174e-03,1.434354566325799982e-02,-4.340084565202689815e-02,3.430885887772629900e-02,7.026862549151949647e-03,-1.350401824497050006e-02
-8.906293935226029801e-02,-4.464163650698899782e-02,-4.177375257387799801e-02,-1.944209332987930153e-02,-6.623874415566440021e-02,-7.427746902317970690e-02,8.142083605192099172e-03,-3.949338287409189657e-02,1.143797379512540100e-03,-3.007244590430930078e-02
2.354575262934580082e-02,5.068011873981870252e-02,-3.961812842611620034e-02,-5.670610554934250001e-03,-4.835135699904979933e-02,-3.325502052875090042e-02,1.182372140927919965e-02,-3.949338287409189657e-02,-1.016435479455120028e-01,-6.735140813782170000e-02
-4.547247794002570037e-02,-4.464163650698899782e-02,-3.854031635223530150e-02,-2.632783471735180084e-02,-1.532848840222260020e-02,8.781618063081050515e-04,-3.235593223976569732e-02,-2.592261998182820038e-03,1.143797379512540100e-03,-3.835665973397880263e-02
-2.367724723390840155e-02,5.068011873981870252e-02,-2.560657146566450160e-02,4.252957915737339695e-02,-5.385516843185429725e-02,-4.765984977106939996e-02,-2.131101882750449997e-02,-3.949338287409189657e-02,1.143797379512540100e-03,1.963283707370720027e-02
-9.996055470531900466e-02,-4.464163650698899782e-02,-2.345094731790270046e-02,-6.419941234845069622e-02,-5.798302700645770191e-02,-6.018578824265070210e-02,1.182372140927919965e-02,-3.949338287409189657e-02,-1.811826730789670159e-02,-5.078298047848289754e-02
-2.730978568492789874e-02,-4.464163650698899782e-02,-6.656343027313869898e-02,-1.123996020607579971e-01,-4.972730985725089953e-02,-4.139688053527879746e-02,7.788079970179680352e-04,-3.949338287409189657e-02,-3.581672810154919867e-02,-9.361911330135799444e-03
3.081082953138499989e-02,5.068011873981870252e-02,3.259528052390420205e-02,4.941532054484590319e-02,-4.009563984984299695e-02,-4.358891976780549654e-02,-6.917231028063640375e-02,3.430885887772629900e-02,6.301661511474640487e-02,3.064409414368320182e-03
-1.035930931563389945e-01,5.068011873981870252e-02,-4.608500086940160029e-02,-2.632783471735180084e-02,-2.496015840963049931e-02,-2.480001206043359885e-02,3.023191042971450082e-02,-3.949338287409189657e-02,-3.980959436433750137e-02,-5.492508739331759815e-02
6.713621404158050254e-02,5.068011873981870252e-02,-2.991781976118810041e-02,5.744868538213489945e-02,-1.930069620102049918e-04,-1.571870666853709964e-02,7.441156407875940126e-02,-5.056371913686460301e-02,-3.845911230135379971e-02,7.206516329203029904e-03
-5.273755484206479882e-02,-4.464163650698899782e-02,-1.267282657909369996e-02,-6.075654165471439799e-02,-1.930069620102049918e-04,8.080576427467340075e-03,1.182372140927919965e-02,-2.592261998182820038e-03,-2.712864555432650121e-02,-5.078298047848289754e-02
-2.730978568492789874e-02,5.068011873981870252e-02,-1.590626280073640167e-02,-2.977070541108809906e-02,3.934851612593179802e-03,-6.875805026395569565e-04,4.127682384197570165e-02,-3.949338287409189657e-02,-2.364455757213410059e-02,1.134862324403770016e-02
-3.820740103798660192e-02,5.068011873981870252e-02,7.139651518361660176e-02,-5.731367096097819691e-02,1.539137131565160022e-01,1.558866503921270130e-01,7.788079970179680352e-04,7.194800217115350505e-02,5.027649338998960160e-02,6.933812005172369786e-02
9.015598825267629943e-03,-4.464163650698899782e-02,-3.099563183506899924e-02,2.187235499495579841e-02,8.062710187196569719e-03,8.706873351046409346e-03,4.460445801105040325e-03,-2.592261998182820038e-03,9.436409146079870192e-03,1.134862324403770016e-02
1.264813727628719998e-02,5.068011873981870252e-02,2.609183074771409820e-04,-1.140872838930430053e-02,3.970962592582259754e-02,5.724488492842390308e-02,-3.971920784793980114e-02,5.608052019451260223e-02,2.405258322689299982e-02,3.205915781821130212e-02
6.713621404158050254e-02,-4.464163650698899782e-02,3.690652881942779739e-02,-5.042792957350569760e-02,-2.358420555142939912e-02,-3.450761437590899733e-02,4.864009945014990260e-02,-3.949338287409189657e-02,-2.595242443518940012e-02,-3.835665973397880263e-02
4.534098333546320025e-02,-4.464163650698899782e-02,3.906215296718960200e-02,4.597244985110970211e-02,6.686757328995440036e-03,-2.417371513685449835e-02,8.142083605192099172e-03,-1.255556463467829946e-02,6.432823302367089713e-02,5.691179930721949887e-02
6.713621404158050254e-02,5.068011873981870252e-02,-1.482845072685549936e-02,5.859630917623830093e-02,-5.935897986465880211e-02,-3.450761437590899733e-02,-6.180903467246220279e-02,1.290620876969899959e-02,-5.145307980263110273e-03,4.862758547755009764e-02
2.717829108036539862e-02,-4.464163650698899782e-02,6.727790750762559745e-03,3.564383776990089764e-02,7.961225881365530110e-02,7.071026878537380045e-02,1.550535921336619952e-02,3.430885887772629900e-02,4.067226371449769728e-02,1.134862324403770016e-02
5.623859868852180283e-02,-4.464163650698899782e-02,-6.871905442090049665e-02,-6.878990659528949614e-02,-1.930069620102049918e-04,-1.000728964429089965e-03,4.495846164606279866e-02,-3.764832683029650101e-02,-4.836172480289190057e-02,-1.077697500466389974e-03
3.444336798240450054e-02,5.068011873981870252e-02,-9.439390357450949676e-03,5.974393262605470073e-02,-3.596778127523959923e-02,-7.576846662009279788e-03,-7.653558588881050062e-02,7.120997975363539678e-02,1.100810104587249955e-02,-2.178823207463989955e-02
2.354575262934580082e-02,-4.464163650698899782e-02,1.966153563733339868e-02,-1.255635194240680048e-02,8.374011738825870577e-02,3.876912568284150012e-02,6.336665066649820044e-02,-2.592261998182820038e-03,6.604820616309839409e-02,4.862758547755009764e-02
4.897352178648269744e-02,5.068011873981870252e-02,7.462995140525929827e-02,6.662967401352719310e-02,-9.824676969418109224e-03,-2.253322811587220049e-03,-4.340084565202689815e-02,3.430885887772629900e-02,3.365681290238470291e-02,1.963283707370720027e-02
3.081082953138499989e-02,5.068011873981870252e-02,-8.361578283570040432e-03,4.658001526274530187e-03,1.494247447820220079e-02,2.749578105841839898e-02,8.142083605192099172e-03,-8.127430129569179762e-03,-2.952762274177360077e-02,5.691179930721949887e-02
-1.035930931563389945e-01,5.068011873981870252e-02,-2.345094731790270046e-02,-2.288496402361559975e-02,-8.687803702868139577e-02,-6.770135132559949864e-02,-1.762938102341739949e-02,-3.949338287409189657e-02,-7.814091066906959926e-02,-7.149351505265640061e-02
1.628067572730669890e-02,5.068011873981870252e-02,-4.608500086940160029e-02,1.154374291374709975e-02,-3.321587555883730170e-02,-1.603185513032660131e-02,-1.026610541524320026e-02,-2.592261998182820038e-03,-4.398540256559110156e-02,-4.249876664881350324e-02
-6.000263174410389727e-02,5.068011873981870252e-02,5.415152200152219958e-02,-1.944209332987930153e-02,-4.972730985725089953e-02,-4.891244361822749687e-02,2.286863482154040048e-02,-3.949338287409189657e-02,-4.398540256559110156e-02,-5.219804415301099697e-03
-2.730978568492789874e-02,-4.464163650698899782e-02,-3.530688013059259805e-02,-2.977070541108809906e-02,-5.660707414825649764e-02,-5.862004593370299943e-02,3.023191042971450082e-02,-3.949338287409189657e-02,-4.986846773523059828e-02,-1.294830118603420011e-01
4.170844488444359899e-02,-4.464163650698899782e-02,-3.207344390894990155e-02,-6.190416520781699683e-02,7.961225881365530110e-02,5.098191569263330059e-02,5.600337505832399948e-02,-9.972486173364639508e-03,4.506616833626150148e-02,-5.906719430815229877e-02
-8.179786245022120650e-02,-4.464163650698899782e-02,-8.165279930747129655e-02,-4.009931749229690007e-02,2.558898754392050119e-03,-1.853704282464289921e-02,7.072992627467229731e-02,-3.949338287409189657e-02,-1.090443584737709956e-02,-9.220404962683000083e-02
-4.183993948900609910e-02,-4.464163650698899782e-02,4.768464955823679963e-02,5.974393262605470073e-02,1.277706088506949944e-01,1.280164372928579986e-01,-2.499265663159149983e-02,1.081111006295440019e-01,6.389312063683939835e-02,4.034337164788070335e-02
-1.277963188084970010e-02,-4.464163650698899782e-02,6.061839444480759953e-02,5.285819123858220142e-02,4.796534307502930278e-02,2.937467182915549924e-02,-1.762938102341739949e-02,3.430885887772629900e-02,7.021129819331020649e-02,7.206516329203029904e-03
6.713621404158050254e-02,-4.464163650698899782e-02,5.630714614928399725e-02,7.351541540099980343e-02,-1.395253554402150001e-02,-3.920484130275200124e-02,-3.235593223976569732e-02,-2.592261998182820038e-03,7.573758845754760549e-02,3.620126473304600273e-02
-5.273755484206479882e-02,5.068011873981870252e-02,9.834181703063900326e-02,8.728689817594480205e-02,6.034891879883950289e-02,4.878987646010649742e-02,-5.812739686837520292e-02,1.081111006295440019e-01,8.449528221240310000e-02,4.034337164788070335e-02
5.383060374248070309e-03,-4.464163650698899782e-02,5.954058237092670069e-02,-5.616604740787570216e-02,2.457414448561009990e-02,5.286080646337049799e-02,-4.340084565202689815e-02,5.091436327188540029e-02,-4.219859706946029777e-03,-3.007244590430930078e-02
8.166636784565869944e-02,-4.464163650698899782e-02,3.367309259778510089e-02,8.100872220010799790e-03,5.209320164963270050e-02,5.661858800484489973e-02,-1.762938102341739949e-02,3.430885887772629900e-02,3.486419309615960277e-02,6.933812005172369786e-02
3.081082953138499989e-02,5.068011873981870252e-02,5.630714614928399725e-02,7.695828609473599757e-02,4.934129593323050011e-02,-1.227407358885230018e-02,-3.603757004385269719e-02,7.120997975363539678e-02,1.200533820015380060e-01,9.004865462589720093e-02
1.750521923228520000e-03,-4.464163650698899782e-02,-6.548561819925780014e-02,-5.670610554934250001e-03,-7.072771253015849857e-03,-1.947648821001150138e-02,4.127682384197570165e-02,-3.949338287409189657e-02,-3.303712578676999863e-03,7.206516329203029904e-03
-4.910501639104519755e-02,-4.464163650698899782e-02,1.608549173157310108e-01,-4.698505887976939938e-02,-2.908801698423390050e-02,-1.978963667180099958e-02,-4.708248345611389801e-02,3.430885887772629900e-02,2.801650652326400162e-02,1.134862324403770016e-02
-2.730978568492789874e-02,5.068011873981870252e-02,-5.578530953432969675e-02,2.531522568869210010e-02,-7.072771253015849857e-03,-2.354741821327540133e-02,5.232173725423699961e-02,-3.949338287409189657e-02,-5.145307980263110273e-03,-5.078298047848289754e-02
7.803382939463919532e-02,5.068011873981870252e-02,-2.452875939178359929e-02,-4.239456463293059946e-02,6.686757328995440036e-03,5.286080646337049799e-02,-6.917231028063640375e-02,8.080427118137170628e-02,-3.712834601047360072e-02,5.691179930721949887e-02
1.264813727628719998e-02,-4.464163650698899782e-02,-3.638469220447349689e-02,4.252957915737339695e-02,-1.395253554402150001e-02,1.293437758520510003e-02,-2.683347553363510038e-02,5.156973385758089994e-03,-4.398540256559110156e-02,7.206516329203029904e-03
4.170844488444359899e-02,-4.464163650698899782e-02,-8.361578283570040432e-03,-5.731367096097819691e-02,8.062710187196569719e-03,-3.137612975801370302e-02,1.517259579645879874e-01,-7.639450375000099436e-02,-8.023654024890179703e-02,-1.764612515980519894e-02
4.897352178648269744e-02,-4.464163650698899782e-02,-4.177375257387799801e-02,1.045012516446259948e-01,3.558176735121919981e-02,-2.573945744580210040e-02,1.774974225931970073e-01,-7.639450375000099436e-02,-1.290794225416879923e-02,1.549073015887240078e-02
-1.641217033186929963e-02,5.068011873981870252e-02,1.274427430254229943e-01,9.761551025715360652e-02,1.631842733640340160e-02,1.747503028115330106e-02,-2.131101882750449997e-02,3.430885887772629900e-02,3.486419309615960277e-02,3.064409414368320182e-03
-7.453278554818210111e-02,5.068011873981870252e-02,-7.734155101194770121e-02,-4.698505887976939938e-02,-4.697540414084860200e-02,-3.262872360517189707e-02,4.460445801105040325e-03,-3.949338287409189657e-02,-7.212845460195599356e-02,-1.764612515980519894e-02
3.444336798240450054e-02,5.068011873981870252e-02,2.828403222838059977e-02,-3.321357610482440076e-02,-4.559945128264750180e-02,-9.768885894535990141e-03,-5.076412126020100196e-02,-2.592261998182820038e-03,-5.947269741072230137e-02,-2.178823207463989955e-02
-3.457486258696700065e-02,5.068011873981870252e-02,-2.560657146566450160e-02,-1.714684618924559867e-02,1.182945896190920002e-03,-2.879619735166290186e-03,8.142083605192099172e-03,-1.550765430475099967e-02,1.482271084126630077e-02,4.034337164788070335e-02
-5.273755484206479882e-02,5.068011873981870252e-02,-6.225218197761509670e-02,1.154374291374709975e-02,-8.448724111216979540e-03,-3.669965360843580049e-02,1.222728555318910032e-01,-7.639450375000099436e-02,-8.682899321629239386e-02,3.064409414368320182e-03
5.987113713954139715e-02,-4.464163650698899782e-02,-8.168937664037369826e-04,-8.485663651086830517e-02,7.548440023905199359e-02,7.947842571548069390e-02,4.460445801105040325e-03,3.430885887772629900e-02,2.337484127982079885e-02,2.791705090337660150e-02
6.350367559056099842e-02,5.068011873981870252e-02,8.864150836571099701e-02,7.007254470726349826e-02,2.044628591100669870e-02,3.751653183568340322e-02,-5.076412126020100196e-02,7.120997975363539678e-02,2.930041326858690010e-02,7.348022696655839847e-02
9.015598825267629943e-03,-4.464163650698899782e-02,-3.207344390894990155e-02,-2.632783471735180084e-02,4.246153164222479792e-02,-1.039518281811509931e-02,1.590892335727620011e-01,-7.639450375000099436e-02,-1.190068480150809939e-02,-3.835665973397880263e-02
5.383060374248070309e-03,5.068011873981870252e-02,3.043965637614240091e-02,8.384402748220859403e-02,-3.734373413344069942e-02,-4.734670130927989828e-02,1.550535921336619952e-02,-3.949338287409189657e-02,8.640282933063080789e-03,1.549073015887240078e-02
3.807590643342410180e-02,5.068011873981870252e-02,8.883414898524360018e-03,4.252957915737339695e-02,-4.284754556624519733e-02,-2.104223051895920057e-02,-3.971920784793980114e-02,-2.592261998182820038e-03,-1.811826730789670159e-02,7.206516329203029904e-03
1.264813727628719998e-02,-4.464163650698899782e-02,6.727790750762559745e-03,-5.616604740787570216e-02,-7.587041416307230279e-02,-6.644875747844139480e-02,-2.131101882750449997e-02,-3.764832683029650101e-02,-1.811826730789670159e-02,-9.220404962683000083e-02
7.440129094361959405e-02,5.068011873981870252e-02,-2.021751109626000048e-02,4.597244985110970211e-02,7.410844738085080319e-02,3.281930490884039930e-02,-3.603757004385269719e-02,7.120997975363539678e-02,1.063542767417259977e-01,3.620126473304600273e-02
1.628067572730669890e-02,-4.464163650698899782e-02,-2.452875939178359929e-02,3.564383776990089764e-02,-7.072771253015849857e-03,-3.192768196955810076e-03,-1.394774321933030074e-02,-2.592261998182820038e-03,1.556684454070180086e-02,1.549073015887240078e-02
-5.514554978810590376e-03,5.068011873981870252e-02,-1.159501450521270051e-02,1.154374291374709975e-02,-2.220825269322829892e-02,-1.540555820674759969e-02,-2.131101882750449997e-02,-2.592261998182820038e-03,1.100810104587249955e-02,6.933812005172369786e-02
1.264813727628719998e-02,-4.464163650698899782e-02,2.612840808061879863e-02,6.318680331979099896e-02,1.250187031342930022e-01,9.169121572527250130e-02,6.336665066649820044e-02,-2.592261998182820038e-03,5.757285620242599822e-02,-2.178823207463989955e-02
-3.457486258696700065e-02,-4.464163650698899782e-02,-5.901874575597240019e-02,1.215130832538269907e-03,-5.385516843185429725e-02,-7.803525056465400456e-02,6.704828847058519337e-02,-7.639450375000099436e-02,-2.139368094035999993e-02,1.549073015887240078e-02
6.713621404158050254e-02,5.068011873981870252e-02,-3.638469220447349689e-02,-8.485663651086830517e-02,-7.072771253015849857e-03,1.966706951368000014e-02,-5.444575906428809897e-02,3.430885887772629900e-02,1.143797379512540100e-03,3.205915781821130212e-02
3.807590643342410180e-02,5.068011873981870252e-02,-2.452875939178359929e-02,4.658001526274530187e-03,-2.633611126783170012e-02,-2.636575436938120090e-02,1.550535921336619952e-02,-3.949338287409189657e-02,-1.599826775813870117e-02,-2.593033898947460017e-02
9.015598825267629943e-03,5.068011873981870252e-02,1.858372356345249984e-02,3.908670846363720280e-02,1.769438019460449832e-02,1.058576412178359981e-02,1.918699701745330000e-02,-2.592261998182820038e-03,1.630495279994180133e-02,-1.764612515980519894e-02
-9.269547780327989928e-02,5.068011873981870252e-02,-9.027529589851850111e-02,-5.731367096097819691e-02,-2.496015840963049931e-02,-3.043668437264510085e-02,-6.584467611156170040e-03,-2.592261998182820038e-03,2.405258322689299982e-02,3.064409414368320182e-03
7.076875249260000666e-02,-4.464163650698899782e-02,-5.128142061927360405e-03,-5.670610554934250001e-03,8.786797596286209655e-02,1.029645603496960049e-01,1.182372140927919965e-02,3.430885887772629900e-02,-8.944018957797799166e-03,2.791705090337660150e-02
-1.641217033186929963e-02,-4.464163650698899782e-02,-5.255187331268700024e-02,-3.321357610482440076e-02,-4.422349842444640161e-02,-3.638650514664620167e-02,1.918699701745330000e-02,-3.949338287409189657e-02,-6.832974362442149896e-02,-3.007244590430930078e-02
4.170844488444359899e-02,5.068011873981870252e-02,-2.237313524402180162e-02,2.875809638242839833e-02,-6.623874415566440021e-02,-4.515466207675319921e-02,-6.180903467246220279e-02,-2.592261998182820038e-03,2.863770518940129874e-03,-5.492508739331759815e-02
1.264813727628719998e-02,-4.464163650698899782e-02,-2.021751109626000048e-02,-1.599922263614299983e-02,1.219056876180000040e-02,2.123281182262769934e-02,-7.653558588881050062e-02,1.081111006295440019e-01,5.988072306548120061e-02,-2.178823207463989955e-02
-3.820740103798660192e-02,-4.464163650698899782e-02,-5.470749746044879791e-02,-7.797089512339580586e-02,-3.321587555883730170e-02,-8.649025903297140327e-02,1.406810445523269948e-01,-7.639450375000099436e-02,-1.919704761394450121e-02,-5.219804415301099697e-03
4.534098333546320025e-02,-4.464163650698899782e-02,-6.205954135808240159e-03,-1.599922263614299983e-02,1.250187031342930022e-01,1.251981011367520047e-01,1.918699701745330000e-02,3.430885887772629900e-02,3.243322577960189995e-02,-5.219804415301099697e-03
7.076875249260000666e-02,5.068011873981870252e-02,-1.698407487461730050e-02,2.187235499495579841e-02,4.383748450042589812e-02,5.630543954305530091e-02,3.759518603788870178e-02,-2.592261998182820038e-03,-7.020931272868760620e-02,-1.764612515980519894e-02
-7.453278554818210111e-02,5.068011873981870252e-02,5.522933407540309841e-02,-4.009931749229690007e-02,5.346915450783389784e-02,5.317395492515999966e-02,-4.340084565202689815e-02,7.120997975363539678e-02,6.123790751970099866e-02,-3.421455281914410201e-02
5.987113713954139715e-02,5.068011873981870252e-02,7.678557555302109594e-02,2.531522568869210010e-02,1.182945896190920002e-03,1.684873335757430118e-02,-5.444575906428809897e-02,3.430885887772629900e-02,2.993564839653250001e-02,4.448547856271539702e-02
7.440129094361959405e-02,-4.464163650698899782e-02,1.858372356345249984e-02,6.318680331979099896e-02,6.172487165704060308e-02,4.284005568610550069e-02,8.142083605192099172e-03,-2.592261998182820038e-03,5.803912766389510147e-02,-5.906719430815229877e-02
9.015598825267629943e-03,-4.464163650698899782e-02,-2.237313524402180162e-02,-3.206595255172180192e-02,-4.972730985725089953e-02,-6.864079671096809387e-02,7.809320188284639419e-02,-7.085933561861459951e-02,-6.291294991625119570e-02,-3.835665973397880263e-02
-7.090024709716259699e-02,-4.464163650698899782e-02,9.295275666123460623e-02,1.269136646684959971e-02,2.044628591100669870e-02,4.252690722431590187e-02,7.788079970179680352e-04,3.598276718899090076e-04,-5.454415271109520208e-02,-1.077697500466389974e-03
2.354575262934580082e-02,5.068011873981870252e-02,-3.099563183506899924e-02,-5.670610554934250001e-03,-1.670444126042380101e-02,1.778817874294279927e-02,-3.235593223976569732e-02,-2.592261998182820038e-03,-7.408887149153539631e-02,-3.421455281914410201e-02
-5.273755484206479882e-02,5.068011873981870252e-02,3.906215296718960200e-02,-4.009931749229690007e-02,-5.696818394814720174e-03,-1.290037051243130006e-02,1.182372140927919965e-02,-3.949338287409189657e-02,1.630495279994180133e-02,3.064409414368320182e-03
6.713621404158050254e-02,-4.464163650698899782e-02,-6.117436990373419786e-02,-4.009931749229690007e-02,-2.633611126783170012e-02,-2.448686359864400003e-02,3.391354823380159783e-02,-3.949338287409189657e-02,-5.615757309500619965e-02,-5.906719430815229877e-02
1.750521923228520000e-03,-4.464163650698899782e-02,-8.361578283570040432e-03,-6.419941234845069622e-02,-3.871968699164179961e-02,-2.448686359864400003e-02,4.460445801105040325e-03,-3.949338287409189657e-02,-6.468302246445030435e-02,-5.492508739331759815e-02
2.354575262934580082e-02,5.068011873981870252e-02,-3.746250427835440266e-02,-4.698505887976939938e-02,-9.100589560328480043e-02,-7.553006287033779687e-02,-3.235593223976569732e-02,-3.949338287409189657e-02,-3.075120986455629965e-02,-1.350401824497050006e-02
3.807590643342410180e-02,5.068011873981870252e-02,-1.375063865297449991e-02,-1.599922263614299983e-02,-3.596778127523959923e-02,-2.198167590432769866e-02,-1.394774321933030074e-02,-2.592261998182820038e-03,-2.595242443518940012e-02,-1.077697500466389974e-03
1.628067572730669890e-02,-4.464163650698899782e-02,7.355213933137849658e-02,-4.124694104539940176e-02,-4.320865536613589623e-03,-1.352666743601040056e-02,-1.394774321933030074e-02,-1.116217163146459961e-03,4.289568789252869857e-02,4.448547856271539702e-02
-1.882016527791040067e-03,5.068011873981870252e-02,-2.452875939178359929e-02,5.285819123858220142e-02,2.732605020201240090e-02,3.000096875273459973e-02,3.023191042971450082e-02,-2.592261998182820038e-03,-2.139368094035999993e-02,3.620126473304600273e-02
1.264813727628719998e-02,-4.464163650698899782e-02,3.367309259778510089e-02,3.334859052598110329e-02,3.007795591841460128e-02,2.718263259662880016e-02,-2.902829807069099918e-03,8.847085473348980864e-03,3.119299070280229930e-02,2.791705090337660150e-02
7.440129094361959405e-02,-4.464163650698899782e-02,3.475090467166599972e-02,9.417263956341730136e-02,5.759701308243719842e-02,2.029336643725910064e-02,2.286863482154040048e-02,-2.592261998182820038e-03,7.380214692004880006e-02,-2.178823207463989955e-02
4.170844488444359899e-02,5.068011873981870252e-02,-3.854031635223530150e-02,5.285819123858220142e-02,7.686035309725310072e-02,1.164299442066459994e-01,-3.971920784793980114e-02,7.120997975363539678e-02,-2.251217192966049885e-02,-1.350401824497050006e-02
-9.147093429830140468e-03,5.068011873981870252e-02,-3.961812842611620034e-02,-4.009931749229690007e-02,-8.448724111216979540e-03,1.622243643399520069e-02,-6.549067247654929980e-02,7.120997975363539678e-02,1.776347786711730131e-02,-6.735140813782170000e-02
9.015598825267629943e-03,5.068011873981870252e-02,-1.894705840284650021e-03,2.187235499495579841e-02,-3.871968699164179961e-02,-2.480001206043359885e-02,-6.584467611156170040e-03,-3.949338287409189657e-02,-3.980959436433750137e-02,-1.350401824497050006e-02
6.713621404158050254e-02,5.068011873981870252e-02,-3.099563183506899924e-02,4.658001526274530187e-03,2.457414448561009990e-02,3.563764106494619888e-02,-2.867429443567860031e-02,3.430885887772629900e-02,2.337484127982079885e-02,8.176444079622779970e-02
1.750521923228520000e-03,-4.464163650698899782e-02,-4.608500086940160029e-02,-3.321357610482440076e-02,-7.311850844667000526e-02,-8.147988364433890462e-02,4.495846164606279866e-02,-6.938329078357829971e-02,-6.117659509433449883e-02,-7.977772888232589898e-02
-9.147093429830140468e-03,5.068011873981870252e-02,1.338730381358059929e-03,-2.227739861197989939e-03,7.961225881365530110e-02,7.008397186179469995e-02,3.391354823380159783e-02,-2.592261998182820038e-03,2.671425763351279944e-02,8.176444079622779970e-02
-5.514554978810590376e-03,-4.464163650698899782e-02,6.492964274033119487e-02,3.564383776990089764e-02,-1.568959820211340015e-03,1.496984258683710031e-02,-1.394774321933030074e-02,7.288388806489919797e-04,-1.811826730789670159e-02,3.205915781821130212e-02
9.619652164973699349e-02,-4.464163650698899782e-02,4.013996504107050084e-02,-5.731367096097819691e-02,4.521343735862710239e-02,6.068951800810880315e-02,-2.131101882750449997e-02,3.615391492152170150e-02,1.255315281338930007e-02,2.377494398854190089e-02
-7.453278554818210111e-02,-4.464163650698899782e-02,-2.345094731790270046e-02,-5.670610554934250001e-03,-2.083229983502719873e-02,-1.415296435958940044e-02,1.550535921336619952e-02,-3.949338287409189657e-02,-3.845911230135379971e-02,-3.007244590430930078e-02
5.987113713954139715e-02,5.068011873981870252e-02,5.307370992764130074e-02,5.285819123858220142e-02,3.282986163481690228e-02,1.966706951368000014e-02,-1.026610541524320026e-02,3.430885887772629900e-02,5.520503808961670089e-02,-1.077697500466389974e-03
-2.367724723390840155e-02,-4.464163650698899782e-02,4.013996504107050084e-02,-1.255635194240680048e-02,-9.824676969418109224e-03,-1.000728964429089965e-03,-2.902829807069099918e-03,-2.592261998182820038e-03,-1.190068480150809939e-02,-3.835665973397880263e-02
9.015598825267629943e-03,-4.464163650698899782e-02,-2.021751109626000048e-02,-5.387080026724189868e-02,3.145390877661580209e-02,2.060651489904859884e-02,5.600337505832399948e-02,-3.949338287409189657e-02,-1.090443584737709956e-02,-1.077697500466389974e-03
1.628067572730669890e-02,5.068011873981870252e-02,1.427247526792889930e-02,1.215130832538269907e-03,1.182945896190920002e-03,-2.135537898074869878e-02,-3.235593223976569732e-02,3.430885887772629900e-02,7.496833602773420036e-02,4.034337164788070335e-02
1.991321417832630017e-02,-4.464163650698899782e-02,-3.422906805671169922e-02,5.515343848250200270e-02,6.722868308984519814e-02,7.415490186505870052e-02,-6.584467611156170040e-03,3.283281404268990206e-02,2.472532334280450050e-02,6.933812005172369786e-02
8.893144474769780483e-02,-4.464163650698899782e-02,6.727790750762559745e-03,2.531522568869210010e-02,3.007795591841460128e-02,8.706873351046409346e-03,6.336665066649820044e-02,-3.949338287409189657e-02,9.436409146079870192e-03,3.205915781821130212e-02
1.991321417832630017e-02,-4.464163650698899782e-02,4.572166603000769880e-03,4.597244985110970211e-02,-1.808039411862490120e-02,-5.454911593043910295e-02,6.336665066649820044e-02,-3.949338287409189657e-02,2.866072031380889965e-02,6.105390622205419948e-02
-2.367724723390840155e-02,-4.464163650698899782e-02,3.043965637614240091e-02,-5.670610554934250001e-03,8.236416453005759863e-02,9.200436418706199604e-02,-1.762938102341739949e-02,7.120997975363539678e-02,3.304707235493409972e-02,3.064409414368320182e-03
9.619652164973699349e-02,-4.464163650698899782e-02,5.199589785376040191e-02,7.925353333865589600e-02,5.484510736603499803e-02,3.657708645031480105e-02,-7.653558588881050062e-02,1.413221094178629955e-01,9.864637430492799453e-02,6.105390622205419948e-02
2.354575262934580082e-02,5.068011873981870252e-02,6.169620651868849837e-02,6.203917986997459916e-02,2.457414448561009990e-02,-3.607335668485669999e-02,-9.126213710515880539e-02,1.553445353507079962e-01,1.333957338374689994e-01,8.176444079622779970e-02
7.076875249260000666e-02,5.068011873981870252e-02,-7.283766209689159811e-03,4.941532054484590319e-02,6.034891879883950289e-02,-4.445362044113949918e-03,-5.444575906428809897e-02,1.081111006295440019e-01,1.290194116001679991e-01,5.691179930721949887e-02
3.081082953138499989e-02,-4.464163650698899782e-02,5.649978676881649634e-03,1.154374291374709975e-02,7.823630595545419397e-02,7.791268340653299818e-02,-4.340084565202689815e-02,1.081111006295440019e-01,6.604820616309839409e-02,1.963283707370720027e-02
-1.882016527791040067e-03,-4.464163650698899782e-02,5.415152200152219958e-02,-6.649465948908450663e-02,7.273249452264969606e-02,5.661858800484489973e-02,-4.340084565202689815e-02,8.486339447772170419e-02,8.449528221240310000e-02,4.862758547755009764e-02
4.534098333546320025e-02,5.068011873981870252e-02,-8.361578283570040432e-03,-3.321357610482440076e-02,-7.072771253015849857e-03,1.191310268097639903e-03,-3.971920784793980114e-02,3.430885887772629900e-02,2.993564839653250001e-02,2.791705090337660150e-02
7.440129094361959405e-02,-4.464163650698899782e-02,1.145089981388529993e-01,2.875809638242839833e-02,2.457414448561009990e-02,2.499059336410210108e-02,1.918699701745330000e-02,-2.592261998182820038e-03,-6.092541861022970299e-04,-5.219804415301099697e-03
-3.820740103798660192e-02,-4.464163650698899782e-02,6.708526688809300642e-02,-6.075654165471439799e-02,-2.908801698423390050e-02,-2.323426975148589965e-02,-1.026610541524320026e-02,-2.592261998182820038e-03,-1.498586820292070049e-03,1.963283707370720027e-02
-1.277963188084970010e-02,5.068011873981870252e-02,-5.578530953432969675e-02,-2.227739861197989939e-03,-2.771206412603280031e-02,-2.918409052548700047e-02,1.918699701745330000e-02,-3.949338287409189657e-02,-1.705210460474350029e-02,4.448547856271539702e-02
9.015598825267629943e-03,5.068011873981870252e-02,3.043965637614240091e-02,4.252957915737339695e-02,-2.944912678412469915e-03,3.689023491210430272e-02,-6.549067247654929980e-02,7.120997975363539678e-02,-2.364455757213410059e-02,1.549073015887240078e-02
8.166636784565869944e-02,5.068011873981870252e-02,-2.560657146566450160e-02,-3.665644679856060184e-02,-7.036660273026780488e-02,-4.640725592391130305e-02,-3.971920784793980114e-02,-2.592261998182820038e-03,-4.118038518800790082e-02,-5.219804415301099697e-03
3.081082953138499989e-02,-4.464163650698899782e-02,1.048086894739250069e-01,7.695828609473599757e-02,-1.120062982761920074e-02,-1.133462820348369975e-02,-5.812739686837520292e-02,3.430885887772629900e-02,5.710418744784390155e-02,3.620126473304600273e-02
2.717829108036539862e-02,5.068011873981870252e-02,-6.205954135808240159e-03,2.875809638242839833e-02,-1.670444126042380101e-02,-1.627025888008149911e-03,-5.812739686837520292e-02,3.430885887772629900e-02,2.930041326858690010e-02,3.205915781821130212e-02
-6.000263174410389727e-02,5.068011873981870252e-02,-4.716281294328249912e-02,-2.288496402361559975e-02,-7.174255558846899528e-02,-5.768060054833450134e-02,-6.584467611156170040e-03,-3.949338287409189657e-02,-6.291294991625119570e-02,-5.492508739331759815e-02
5.383060374248070309e-03,-4.464163650698899782e-02,-4.824062501716339796e-02,-1.255635194240680048e-02,1.182945896190920002e-03,-6.637401276640669812e-03,6.336665066649820044e-02,-3.949338287409189657e-02,-5.140053526058249722e-02,-5.906719430815229877e-02
-2.004470878288880029e-02,-4.464163650698899782e-02,8.540807214406830050e-02,-3.665644679856060184e-02,9.199583453746550121e-02,8.949917649274570508e-02,-6.180903467246220279e-02,1.450122215054540087e-01,8.094791351127560153e-02,5.276969239238479825e-02
1.991321417832630017e-02,5.068011873981870252e-02,-1.267282657909369996e-02,7.007254470726349826e-02,-1.120062982761920074e-02,7.141131042098750048e-03,-3.971920784793980114e-02,3.430885887772629900e-02,5.384369968545729690e-03,3.064409414368320182e-03
-6.363517019512339445e-02,-4.464163650698899782e-02,-3.315125598283080038e-02,-3.321357610482440076e-02,1.182945896190920002e-03,2.405114797873349891e-02,-2.499265663159149983e-02,-2.592261998182820038e-03,-2.251217192966049885e-02,-5.906719430815229877e-02
2.717829108036539862e-02,-4.464163650698899782e-02,-7.283766209689159811e-03,-5.042792957350569760e-02,7.548440023905199359e-02,5.661858800484489973e-02,3.391354823380159783e-02,-2.592261998182820038e-03,4.344317225278129802e-02,1.549073015887240078e-02
-1.641217033186929963e-02,-4.464163650698899782e-02,-1.375063865297449991e-02,1.320442171945160059e-01,-9.824676969418109224e-03,-3.819065120534880214e-03,1.918699701745330000e-02,-3.949338287409189657e-02,-3.581672810154919867e-02,-3.007244590430930078e-02
3.081082953138499989e-02,5.068011873981870252e-02,5.954058237092670069e-02,5.630106193231849965e-02,-2.220825269322829892e-02,1.191310268097639903e-03,-3.235593223976569732e-02,-2.592261998182820038e-03,-2.479118743246069845e-02,-1.764612515980519894e-02
5.623859868852180283e-02,5.068011873981870252e-02,2.181715978509519982e-02,5.630106193231849965e-02,-7.072771253015849857e-03,1.810132720473240156e-02,-3.235593223976569732e-02,-2.592261998182820038e-03,-2.364455757213410059e-02,2.377494398854190089e-02
-2.004470878288880029e-02,-4.464163650698899782e-02,1.858372356345249984e-02,9.072976886968099619e-02,3.934851612593179802e-03,8.706873351046409346e-03,3.759518603788870178e-02,-3.949338287409189657e-02,-5.780006567561250114e-02,7.206516329203029904e-03
-1.072256316073579990e-01,-4.464163650698899782e-02,-1.159501450521270051e-02,-4.009931749229690007e-02,4.934129593323050011e-02,6.444729954958319795e-02,-1.394774321933030074e-02,3.430885887772629900e-02,7.026862549151949647e-03,-3.007244590430930078e-02
8.166636784565869944e-02,5.068011873981870252e-02,-2.972517914165530208e-03,-3.321357610482440076e-02,4.246153164222479792e-02,5.787118185200299664e-02,-1.026610541524320026e-02,3.430885887772629900e-02,-6.092541861022970299e-04,-1.077697500466389974e-03
5.383060374248070309e-03,5.068011873981870252e-02,1.750591148957160101e-02,3.220096707616459941e-02,1.277706088506949944e-01,1.273901403692790091e-01,-2.131101882750449997e-02,7.120997975363539678e-02,6.257518145805600340e-02,1.549073015887240078e-02
3.807590643342410180e-02,5.068011873981870252e-02,-2.991781976118810041e-02,-7.452802442965950069e-02,-1.257658268582039982e-02,-1.258722205064180012e-02,4.460445801105040325e-03,-2.592261998182820038e-03,3.711738233435969789e-03,-3.007244590430930078e-02
3.081082953138499989e-02,-4.464163650698899782e-02,-2.021751109626000048e-02,-5.670610554934250001e-03,-4.320865536613589623e-03,-2.949723898727649868e-02,7.809320188284639419e-02,-3.949338287409189657e-02,-1.090443584737709956e-02,-1.077697500466389974e-03
1.750521923228520000e-03,5.068011873981870252e-02,-5.794093368209150136e-02,-4.354218818603310115e-02,-9.650970703608929835e-02,-4.703355284749029946e-02,-9.862541271333299941e-02,3.430885887772629900e-02,-6.117659509433449883e-02,-7.149351505265640061e-02
-2.730978568492789874e-02,5.068011873981870252e-02,6.061839444480759953e-02,1.079441223383619947e-01,1.219056876180000040e-02,-1.759759743927430051e-02,-2.902829807069099918e-03,-2.592261998182820038e-03,7.021129819331020649e-02,1.356118306890790048e-01
-8.543040090124079389e-02,5.068011873981870252e-02,-4.069594049999709917e-02,-3.321357610482440076e-02,-8.137422559587689785e-02,-6.958024209633670298e-02,-6.584467611156170040e-03,-3.949338287409189657e-02,-5.780006567561250114e-02,-4.249876664881350324e-02
1.264813727628719998e-02,5.068011873981870252e-02,-7.195249064254319316e-02,-4.698505887976939938e-02,-5.110326271545199972e-02,-9.713730673381550107e-02,1.185912177278039964e-01,-7.639450375000099436e-02,-2.028874775162960165e-02,-3.835665973397880263e-02
-5.273755484206479882e-02,-4.464163650698899782e-02,-5.578530953432969675e-02,-3.665644679856060184e-02,8.924392882106320368e-02,-3.192768196955810076e-03,8.142083605192099172e-03,3.430885887772629900e-02,1.323726493386760128e-01,3.064409414368320182e-03
-2.367724723390840155e-02,5.068011873981870252e-02,4.552902541047500196e-02,2.187235499495579841e-02,1.098832216940800049e-01,8.887287956916670173e-02,7.788079970179680352e-04,3.430885887772629900e-02,7.419253669003070262e-02,6.105390622205419948e-02
-7.453278554818210111e-02,5.068011873981870252e-02,-9.439390357450949676e-03,1.498661360748330083e-02,-3.734373413344069942e-02,-2.166852744253820046e-02,-1.394774321933030074e-02,-2.592261998182820038e-03,-3.324878724762579674e-02,1.134862324403770016e-02
-5.514554978810590376e-03,5.068011873981870252e-02,-3.315125598283080038e-02,-1.599922263614299983e-02,8.062710187196569719e-03,1.622243643399520069e-02,1.550535921336619952e-02,-2.592261998182820038e-03,-2.832024254799870092e-02,-7.563562196749110123e-02
-6.000263174410389727e-02,5.068011873981870252e-02,4.984027370599859730e-02,1.842948430121960079e-02,-1.670444126042380101e-02,-3.012353591085559917e-02,-1.762938102341739949e-02,-2.592261998182820038e-03,4.976865992074899769e-02,-5.906719430815229877e-02
-2.004470878288880029e-02,-4.464163650698899782e-02,-8.488623552911400694e-02,-2.632783471735180084e-02,-3.596778127523959923e-02,-3.419446591411950259e-02,4.127682384197570165e-02,-5.167075276314189725e-02,-8.238148325810279449e-02,-4.664087356364819692e-02
3.807590643342410180e-02,5.068011873981870252e-02,5.649978676881649634e-03,3.220096707616459941e-02,6.686757328995440036e-03,1.747503028115330106e-02,-2.499265663159149983e-02,3.430885887772629900e-02,1.482271084126630077e-02,6.105390622205419948e-02
1.628067572730669890e-02,-4.464163650698899782e-02,2.073934771121430098e-02,2.187235499495579841e-02,-1.395253554402150001e-02,-1.321351897422090062e-02,-6.584467611156170040e-03,-2.592261998182820038e-03,1.331596790892770020e-02,4.034337164788070335e-02
4.170844488444359899e-02,-4.464163650698899782e-02,-7.283766209689159811e-03,2.875809638242839833e-02,-4.284754556624519733e-02,-4.828614669464850045e-02,5.232173725423699961e-02,-7.639450375000099436e-02,-7.212845460195599356e-02,2.377494398854190089e-02
1.991321417832630017e-02,5.068011873981870252e-02,1.048086894739250069e-01,7.007254470726349826e-02,-3.596778127523959923e-02,-2.667890283117069911e-02,-2.499265663159149983e-02,-2.592261998182820038e-03,3.711738233435969789e-03,4.034337164788070335e-02
-4.910501639104519755e-02,5.068011873981870252e-02,-2.452875939178359929e-02,6.750727943574620551e-05,-4.697540414084860200e-02,-2.824464514011839830e-02,-6.549067247654929980e-02,2.840467953758080144e-02,1.919903307856710151e-02,1.134862324403770016e-02
1.750521923228520000e-03,5.068011873981870252e-02,-6.205954135808240159e-03,-1.944209332987930153e-02,-9.824676969418109224e-03,4.949091809572019746e-03,-3.971920784793980114e-02,3.430885887772629900e-02,1.482271084126630077e-02,9.833286845556660216e-02
3.444336798240450054e-02,-4.464163650698899782e-02,-3.854031635223530150e-02,-1.255635194240680048e-02,9.438663045397699403e-03,5.262240271361550044e-03,-6.584467611156170040e-03,-2.592261998182820038e-03,3.119299070280229930e-02,9.833286845556660216e-02
-4.547247794002570037e-02,5.068011873981870252e-02,1.371430516903520136e-01,-1.599922263614299983e-02,4.108557878402369773e-02,3.187985952347179713e-02,-4.340084565202689815e-02,7.120997975363539678e-02,7.102157794598219775e-02,4.862758547755009764e-02
-9.147093429830140468e-03,5.068011873981870252e-02,1.705552259806600024e-01,1.498661360748330083e-02,3.007795591841460128e-02,3.375875029420900147e-02,-2.131101882750449997e-02,3.430885887772629900e-02,3.365681290238470291e-02,3.205915781821130212e-02
-1.641217033186929963e-02,5.068011873981870252e-02,2.416542455238970041e-03,1.498661360748330083e-02,2.182223876920789951e-02,-1.008203435632550049e-02,-2.499265663159149983e-02,3.430885887772629900e-02,8.553312118743899850e-02,8.176444079622779970e-02
-9.147093429830140468e-03,-4.464163650698899782e-02,3.798434089330870317e-02,-4.009931749229690007e-02,-2.496015840963049931e-02,-3.819065120534880214e-03,-4.340084565202689815e-02,1.585829843977170153e-02,-5.145307980263110273e-03,2.791705090337660150e-02
1.991321417832630017e-02,-4.464163650698899782e-02,-5.794093368209150136e-02,-5.731367096097819691e-02,-1.568959820211340015e-03,-1.258722205064180012e-02,7.441156407875940126e-02,-3.949338287409189657e-02,-6.117659509433449883e-02,-7.563562196749110123e-02
5.260606023750229870e-02,5.068011873981870252e-02,-9.439390357450949676e-03,4.941532054484590319e-02,5.071724879143160031e-02,-1.916333974822199970e-02,-1.394774321933030074e-02,3.430885887772629900e-02,1.193439942037869961e-01,-1.764612515980519894e-02
-2.730978568492789874e-02,5.068011873981870252e-02,-2.345094731790270046e-02,-1.599922263614299983e-02,1.356652162000110060e-02,1.277780335431030062e-02,2.655027262562750096e-02,-2.592261998182820038e-03,-1.090443584737709956e-02,-2.178823207463989955e-02
-7.453278554818210111e-02,-4.464163650698899782e-02,-1.051720243133190055e-02,-5.670610554934250001e-03,-6.623874415566440021e-02,-5.705430362475540085e-02,-2.902829807069099918e-03,-3.949338287409189657e-02,-4.257210492279420166e-02,-1.077697500466389974e-03
|
||||
-1.072256316073579990e-01,-4.464163650698899782e-02,-3.422906805671169922e-02,-6.764228304218700139e-02,-6.348683843926219983e-02,-7.051968748170529822e-02,8.142083605192099172e-03,-3.949338287409189657e-02,-6.092541861022970299e-04,-7.977772888232589898e-02
|
||||
4.534098333546320025e-02,5.068011873981870252e-02,-2.972517914165530208e-03,1.079441223383619947e-01,3.558176735121919981e-02,2.248540566978590033e-02,2.655027262562750096e-02,-2.592261998182820038e-03,2.801650652326400162e-02,1.963283707370720027e-02
|
||||
-1.882016527791040067e-03,-4.464163650698899782e-02,6.816307896197400240e-02,-5.670610554934250001e-03,1.195148917014880047e-01,1.302084765253850029e-01,-2.499265663159149983e-02,8.670845052151719690e-02,4.613233103941480340e-02,-1.077697500466389974e-03
|
||||
1.991321417832630017e-02,5.068011873981870252e-02,9.961226972405269262e-03,1.842948430121960079e-02,1.494247447820220079e-02,4.471894645684260094e-02,-6.180903467246220279e-02,7.120997975363539678e-02,9.436409146079870192e-03,-6.320930122298699938e-02
|
||||
1.628067572730669890e-02,5.068011873981870252e-02,2.416542455238970041e-03,-5.670610554934250001e-03,-5.696818394814720174e-03,1.089891258357309975e-02,-5.076412126020100196e-02,3.430885887772629900e-02,2.269202256674450122e-02,-3.835665973397880263e-02
|
||||
-1.882016527791040067e-03,-4.464163650698899782e-02,-3.854031635223530150e-02,2.187235499495579841e-02,-1.088932827598989989e-01,-1.156130659793979942e-01,2.286863482154040048e-02,-7.639450375000099436e-02,-4.687948284421659950e-02,2.377494398854190089e-02
|
||||
1.628067572730669890e-02,-4.464163650698899782e-02,2.612840808061879863e-02,5.859630917623830093e-02,-6.073493272285990230e-02,-4.421521669138449989e-02,-1.394774321933030074e-02,-3.395821474270550172e-02,-5.140053526058249722e-02,-2.593033898947460017e-02
|
||||
-7.090024709716259699e-02,5.068011873981870252e-02,-8.919748382463760228e-02,-7.452802442965950069e-02,-4.284754556624519733e-02,-2.573945744580210040e-02,-3.235593223976569732e-02,-2.592261998182820038e-03,-1.290794225416879923e-02,-5.492508739331759815e-02
|
||||
4.897352178648269744e-02,-4.464163650698899782e-02,6.061839444480759953e-02,-2.288496402361559975e-02,-2.358420555142939912e-02,-7.271172671423199729e-02,-4.340084565202689815e-02,-2.592261998182820038e-03,1.041376113589790042e-01,3.620126473304600273e-02
|
||||
5.383060374248070309e-03,5.068011873981870252e-02,-2.884000768730720157e-02,-9.113481248670509197e-03,-3.183992270063620150e-02,-2.887094206369749880e-02,8.142083605192099172e-03,-3.949338287409189657e-02,-1.811826730789670159e-02,7.206516329203029904e-03
|
||||
3.444336798240450054e-02,5.068011873981870252e-02,-2.991781976118810041e-02,4.658001526274530187e-03,9.337178739566659447e-02,8.699398879842949739e-02,3.391354823380159783e-02,-2.592261998182820038e-03,2.405258322689299982e-02,-3.835665973397880263e-02
|
||||
2.354575262934580082e-02,5.068011873981870252e-02,-1.913969902237900103e-02,4.941532054484590319e-02,-6.348683843926219983e-02,-6.112523362801929733e-02,4.460445801105040325e-03,-3.949338287409189657e-02,-2.595242443518940012e-02,-1.350401824497050006e-02
|
||||
1.991321417832630017e-02,-4.464163650698899782e-02,-4.069594049999709917e-02,-1.599922263614299983e-02,-8.448724111216979540e-03,-1.759759743927430051e-02,5.232173725423699961e-02,-3.949338287409189657e-02,-3.075120986455629965e-02,3.064409414368320182e-03
|
||||
-4.547247794002570037e-02,-4.464163650698899782e-02,1.535028734180979987e-02,-7.452802442965950069e-02,-4.972730985725089953e-02,-1.728444897748479883e-02,-2.867429443567860031e-02,-2.592261998182820038e-03,-1.043648208321659998e-01,-7.563562196749110123e-02
|
||||
5.260606023750229870e-02,5.068011873981870252e-02,-2.452875939178359929e-02,5.630106193231849965e-02,-7.072771253015849857e-03,-5.071658967693000106e-03,-2.131101882750449997e-02,-2.592261998182820038e-03,2.671425763351279944e-02,-3.835665973397880263e-02
|
||||
-5.514554978810590376e-03,5.068011873981870252e-02,1.338730381358059929e-03,-8.485663651086830517e-02,-1.120062982761920074e-02,-1.665815205390569834e-02,4.864009945014990260e-02,-3.949338287409189657e-02,-4.118038518800790082e-02,-8.806194271199530021e-02
|
||||
9.015598825267629943e-03,5.068011873981870252e-02,6.924089103585480409e-02,5.974393262605470073e-02,1.769438019460449832e-02,-2.323426975148589965e-02,-4.708248345611389801e-02,3.430885887772629900e-02,1.032922649115240038e-01,7.348022696655839847e-02
|
||||
-2.367724723390840155e-02,-4.464163650698899782e-02,-6.979686649478139548e-02,-6.419941234845069622e-02,-5.935897986465880211e-02,-5.047818592717519953e-02,1.918699701745330000e-02,-3.949338287409189657e-02,-8.913686007934769340e-02,-5.078298047848289754e-02
|
||||
-4.183993948900609910e-02,5.068011873981870252e-02,-2.991781976118810041e-02,-2.227739861197989939e-03,2.182223876920789951e-02,3.657708645031480105e-02,1.182372140927919965e-02,-2.592261998182820038e-03,-4.118038518800790082e-02,6.519601313688899724e-02
|
||||
-7.453278554818210111e-02,-4.464163650698899782e-02,-4.608500086940160029e-02,-4.354218818603310115e-02,-2.908801698423390050e-02,-2.323426975148589965e-02,1.550535921336619952e-02,-3.949338287409189657e-02,-3.980959436433750137e-02,-2.178823207463989955e-02
|
||||
3.444336798240450054e-02,-4.464163650698899782e-02,1.858372356345249984e-02,5.630106193231849965e-02,1.219056876180000040e-02,-5.454911593043910295e-02,-6.917231028063640375e-02,7.120997975363539678e-02,1.300806095217529879e-01,7.206516329203029904e-03
|
||||
-6.000263174410389727e-02,-4.464163650698899782e-02,1.338730381358059929e-03,-2.977070541108809906e-02,-7.072771253015849857e-03,-2.166852744253820046e-02,1.182372140927919965e-02,-2.592261998182820038e-03,3.181521750079859684e-02,-5.492508739331759815e-02
|
||||
-8.543040090124079389e-02,5.068011873981870252e-02,-3.099563183506899924e-02,-2.288496402361559975e-02,-6.348683843926219983e-02,-5.423596746864960128e-02,1.918699701745330000e-02,-3.949338287409189657e-02,-9.643322289178400675e-02,-3.421455281914410201e-02
|
||||
5.260606023750229870e-02,-4.464163650698899782e-02,-4.050329988046450294e-03,-3.091832896419060075e-02,-4.697540414084860200e-02,-5.830689747191349775e-02,-1.394774321933030074e-02,-2.583996815000549896e-02,3.605579008983190309e-02,2.377494398854190089e-02
|
||||
1.264813727628719998e-02,-4.464163650698899782e-02,1.535028734180979987e-02,-3.321357610482440076e-02,4.108557878402369773e-02,3.219300798526129881e-02,-2.902829807069099918e-03,-2.592261998182820038e-03,4.506616833626150148e-02,-6.735140813782170000e-02
|
||||
5.987113713954139715e-02,5.068011873981870252e-02,2.289497185897609866e-02,4.941532054484590319e-02,1.631842733640340160e-02,1.183835796894170019e-02,-1.394774321933030074e-02,-2.592261998182820038e-03,3.953987807202419963e-02,1.963283707370720027e-02
|
||||
-2.367724723390840155e-02,-4.464163650698899782e-02,4.552902541047500196e-02,9.072976886968099619e-02,-1.808039411862490120e-02,-3.544705976127759950e-02,7.072992627467229731e-02,-3.949338287409189657e-02,-3.452371533034950118e-02,-9.361911330135799444e-03
|
||||
1.628067572730669890e-02,-4.464163650698899782e-02,-4.500718879552070145e-02,-5.731367096097819691e-02,-3.459182841703849903e-02,-5.392281900686000246e-02,7.441156407875940126e-02,-7.639450375000099436e-02,-4.257210492279420166e-02,4.034337164788070335e-02
|
||||
1.107266754538149961e-01,5.068011873981870252e-02,-3.315125598283080038e-02,-2.288496402361559975e-02,-4.320865536613589623e-03,2.029336643725910064e-02,-6.180903467246220279e-02,7.120997975363539678e-02,1.556684454070180086e-02,4.448547856271539702e-02
|
||||
-2.004470878288880029e-02,-4.464163650698899782e-02,9.726400495675820157e-02,-5.670610554934250001e-03,-5.696818394814720174e-03,-2.386056667506489953e-02,-2.131101882750449997e-02,-2.592261998182820038e-03,6.168584882386619894e-02,4.034337164788070335e-02
|
||||
-1.641217033186929963e-02,-4.464163650698899782e-02,5.415152200152219958e-02,7.007254470726349826e-02,-3.321587555883730170e-02,-2.793149667832890010e-02,8.142083605192099172e-03,-3.949338287409189657e-02,-2.712864555432650121e-02,-9.361911330135799444e-03
|
||||
4.897352178648269744e-02,5.068011873981870252e-02,1.231314947298999957e-01,8.384402748220859403e-02,-1.047654241852959967e-01,-1.008950882752900069e-01,-6.917231028063640375e-02,-2.592261998182820038e-03,3.664579779339879884e-02,-3.007244590430930078e-02
|
||||
-5.637009329308430294e-02,-4.464163650698899782e-02,-8.057498723359039772e-02,-8.485663651086830517e-02,-3.734373413344069942e-02,-3.701280207022530216e-02,3.391354823380159783e-02,-3.949338287409189657e-02,-5.615757309500619965e-02,-1.377672256900120129e-01
|
||||
2.717829108036539862e-02,-4.464163650698899782e-02,9.295275666123460623e-02,-5.272317671413939699e-02,8.062710187196569719e-03,3.970857106821010230e-02,-2.867429443567860031e-02,2.102445536239900062e-02,-4.836172480289190057e-02,1.963283707370720027e-02
|
||||
6.350367559056099842e-02,-4.464163650698899782e-02,-5.039624916492520257e-02,1.079441223383619947e-01,3.145390877661580209e-02,1.935392105189049847e-02,-1.762938102341739949e-02,2.360753382371260159e-02,5.803912766389510147e-02,4.034337164788070335e-02
|
||||
-5.273755484206479882e-02,5.068011873981870252e-02,-1.159501450521270051e-02,5.630106193231849965e-02,5.622106022423609822e-02,7.290230801790049953e-02,-3.971920784793980114e-02,7.120997975363539678e-02,3.056648739841480097e-02,-5.219804415301099697e-03
|
||||
-9.147093429830140468e-03,5.068011873981870252e-02,-2.776219561342629927e-02,8.100872220010799790e-03,4.796534307502930278e-02,3.720338337389379746e-02,-2.867429443567860031e-02,3.430885887772629900e-02,6.604820616309839409e-02,-4.249876664881350324e-02
|
||||
5.383060374248070309e-03,-4.464163650698899782e-02,5.846277029704580186e-02,-4.354218818603310115e-02,-7.311850844667000526e-02,-7.239857825244250256e-02,1.918699701745330000e-02,-7.639450375000099436e-02,-5.140053526058249722e-02,-2.593033898947460017e-02
|
||||
7.440129094361959405e-02,-4.464163650698899782e-02,8.540807214406830050e-02,6.318680331979099896e-02,1.494247447820220079e-02,1.309095181609989944e-02,1.550535921336619952e-02,-2.592261998182820038e-03,6.209315616505399656e-03,8.590654771106250032e-02
|
||||
-5.273755484206479882e-02,-4.464163650698899782e-02,-8.168937664037369826e-04,-2.632783471735180084e-02,1.081461590359879960e-02,7.141131042098750048e-03,4.864009945014990260e-02,-3.949338287409189657e-02,-3.581672810154919867e-02,1.963283707370720027e-02
|
||||
8.166636784565869944e-02,5.068011873981870252e-02,6.727790750762559745e-03,-4.522987001831730094e-03,1.098832216940800049e-01,1.170562411302250028e-01,-3.235593223976569732e-02,9.187460744414439884e-02,5.472400334817909689e-02,7.206516329203029904e-03
|
||||
-5.514554978810590376e-03,-4.464163650698899782e-02,8.883414898524360018e-03,-5.042792957350569760e-02,2.595009734381130070e-02,4.722413415115889884e-02,-4.340084565202689815e-02,7.120997975363539678e-02,1.482271084126630077e-02,3.064409414368320182e-03
|
||||
-2.730978568492789874e-02,-4.464163650698899782e-02,8.001901177466380632e-02,9.876313370696999938e-02,-2.944912678412469915e-03,1.810132720473240156e-02,-1.762938102341739949e-02,3.311917341962639788e-03,-2.952762274177360077e-02,3.620126473304600273e-02
|
||||
-5.273755484206479882e-02,-4.464163650698899782e-02,7.139651518361660176e-02,-7.452802442965950069e-02,-1.532848840222260020e-02,-1.313877426218630021e-03,4.460445801105040325e-03,-2.141183364489639834e-02,-4.687948284421659950e-02,3.064409414368320182e-03
|
||||
9.015598825267629943e-03,-4.464163650698899782e-02,-2.452875939178359929e-02,-2.632783471735180084e-02,9.887559882847110626e-02,9.419640341958869512e-02,7.072992627467229731e-02,-2.592261998182820038e-03,-2.139368094035999993e-02,7.206516329203029904e-03
|
||||
-2.004470878288880029e-02,-4.464163650698899782e-02,-5.470749746044879791e-02,-5.387080026724189868e-02,-6.623874415566440021e-02,-5.736745208654490252e-02,1.182372140927919965e-02,-3.949338287409189657e-02,-7.408887149153539631e-02,-5.219804415301099697e-03
|
||||
2.354575262934580082e-02,-4.464163650698899782e-02,-3.638469220447349689e-02,6.750727943574620551e-05,1.182945896190920002e-03,3.469819567957759671e-02,-4.340084565202689815e-02,3.430885887772629900e-02,-3.324878724762579674e-02,6.105390622205419948e-02
|
||||
3.807590643342410180e-02,5.068011873981870252e-02,1.642809941569069870e-02,2.187235499495579841e-02,3.970962592582259754e-02,4.503209491863210262e-02,-4.340084565202689815e-02,7.120997975363539678e-02,4.976865992074899769e-02,1.549073015887240078e-02
|
||||
-7.816532399920170238e-02,5.068011873981870252e-02,7.786338762690199478e-02,5.285819123858220142e-02,7.823630595545419397e-02,6.444729954958319795e-02,2.655027262562750096e-02,-2.592261998182820038e-03,4.067226371449769728e-02,-9.361911330135799444e-03
|
||||
9.015598825267629943e-03,5.068011873981870252e-02,-3.961812842611620034e-02,2.875809638242839833e-02,3.833367306762140020e-02,7.352860494147960002e-02,-7.285394808472339667e-02,1.081111006295440019e-01,1.556684454070180086e-02,-4.664087356364819692e-02
|
||||
1.750521923228520000e-03,5.068011873981870252e-02,1.103903904628619932e-02,-1.944209332987930153e-02,-1.670444126042380101e-02,-3.819065120534880214e-03,-4.708248345611389801e-02,3.430885887772629900e-02,2.405258322689299982e-02,2.377494398854190089e-02
|
||||
-7.816532399920170238e-02,-4.464163650698899782e-02,-4.069594049999709917e-02,-8.141376581713200000e-02,-1.006375656106929944e-01,-1.127947298232920004e-01,2.286863482154040048e-02,-7.639450375000099436e-02,-2.028874775162960165e-02,-5.078298047848289754e-02
|
||||
3.081082953138499989e-02,5.068011873981870252e-02,-3.422906805671169922e-02,4.367720260718979675e-02,5.759701308243719842e-02,6.883137801463659611e-02,-3.235593223976569732e-02,5.755656502954899917e-02,3.546193866076970125e-02,8.590654771106250032e-02
|
||||
-3.457486258696700065e-02,5.068011873981870252e-02,5.649978676881649634e-03,-5.670610554934250001e-03,-7.311850844667000526e-02,-6.269097593696699999e-02,-6.584467611156170040e-03,-3.949338287409189657e-02,-4.542095777704099890e-02,3.205915781821130212e-02
|
||||
4.897352178648269744e-02,5.068011873981870252e-02,8.864150836571099701e-02,8.728689817594480205e-02,3.558176735121919981e-02,2.154596028441720101e-02,-2.499265663159149983e-02,3.430885887772629900e-02,6.604820616309839409e-02,1.314697237742440128e-01
|
||||
-4.183993948900609910e-02,-4.464163650698899782e-02,-3.315125598283080038e-02,-2.288496402361559975e-02,4.658939021682820258e-02,4.158746183894729970e-02,5.600337505832399948e-02,-2.473293452372829840e-02,-2.595242443518940012e-02,-3.835665973397880263e-02
|
||||
-9.147093429830140468e-03,-4.464163650698899782e-02,-5.686312160821060252e-02,-5.042792957350569760e-02,2.182223876920789951e-02,4.534524338042170144e-02,-2.867429443567860031e-02,3.430885887772629900e-02,-9.918957363154769225e-03,-1.764612515980519894e-02
|
||||
7.076875249260000666e-02,5.068011873981870252e-02,-3.099563183506899924e-02,2.187235499495579841e-02,-3.734373413344069942e-02,-4.703355284749029946e-02,3.391354823380159783e-02,-3.949338287409189657e-02,-1.495647502491130078e-02,-1.077697500466389974e-03
|
||||
9.015598825267629943e-03,-4.464163650698899782e-02,5.522933407540309841e-02,-5.670610554934250001e-03,5.759701308243719842e-02,4.471894645684260094e-02,-2.902829807069099918e-03,2.323852261495349888e-02,5.568354770267369691e-02,1.066170822852360034e-01
|
||||
-2.730978568492789874e-02,-4.464163650698899782e-02,-6.009655782985329903e-02,-2.977070541108809906e-02,4.658939021682820258e-02,1.998021797546959896e-02,1.222728555318910032e-01,-3.949338287409189657e-02,-5.140053526058249722e-02,-9.361911330135799444e-03
|
||||
1.628067572730669890e-02,-4.464163650698899782e-02,1.338730381358059929e-03,8.100872220010799790e-03,5.310804470794310353e-03,1.089891258357309975e-02,3.023191042971450082e-02,-3.949338287409189657e-02,-4.542095777704099890e-02,3.205915781821130212e-02
|
||||
-1.277963188084970010e-02,-4.464163650698899782e-02,-2.345094731790270046e-02,-4.009931749229690007e-02,-1.670444126042380101e-02,4.635943347782499856e-03,-1.762938102341739949e-02,-2.592261998182820038e-03,-3.845911230135379971e-02,-3.835665973397880263e-02
|
||||
-5.637009329308430294e-02,-4.464163650698899782e-02,-7.410811479030500470e-02,-5.042792957350569760e-02,-2.496015840963049931e-02,-4.703355284749029946e-02,9.281975309919469896e-02,-7.639450375000099436e-02,-6.117659509433449883e-02,-4.664087356364819692e-02
|
||||
4.170844488444359899e-02,5.068011873981870252e-02,1.966153563733339868e-02,5.974393262605470073e-02,-5.696818394814720174e-03,-2.566471273376759888e-03,-2.867429443567860031e-02,-2.592261998182820038e-03,3.119299070280229930e-02,7.206516329203029904e-03
|
||||
-5.514554978810590376e-03,5.068011873981870252e-02,-1.590626280073640167e-02,-6.764228304218700139e-02,4.934129593323050011e-02,7.916527725369119917e-02,-2.867429443567860031e-02,3.430885887772629900e-02,-1.811826730789670159e-02,4.448547856271539702e-02
|
||||
4.170844488444359899e-02,5.068011873981870252e-02,-1.590626280073640167e-02,1.728186074811709910e-02,-3.734373413344069942e-02,-1.383981589779990050e-02,-2.499265663159149983e-02,-1.107951979964190078e-02,-4.687948284421659950e-02,1.549073015887240078e-02
|
||||
-4.547247794002570037e-02,-4.464163650698899782e-02,3.906215296718960200e-02,1.215130832538269907e-03,1.631842733640340160e-02,1.528299104862660025e-02,-2.867429443567860031e-02,2.655962349378539894e-02,4.452837402140529671e-02,-2.593033898947460017e-02
|
||||
-4.547247794002570037e-02,-4.464163650698899782e-02,-7.303030271642410587e-02,-8.141376581713200000e-02,8.374011738825870577e-02,2.780892952020790065e-02,1.738157847891100005e-01,-3.949338287409189657e-02,-4.219859706946029777e-03,3.064409414368320182e-03
|
||||
|
@@ -1,442 +0,0 @@
[deleted single-column data file: 442 diabetes target values]
@@ -80,9 +80,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register input and output datasets\n",
"## Create trained model\n",
"\n",
"For this example, we have provided a small model (`sklearn_regression_model.pkl` in the notebook's directory) that was trained on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset). Here, you will register the data used to create this model in your workspace."
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset). "
]
},
{
@@ -91,9 +91,42 @@
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"\n",
"\n",
"dataset_x, dataset_y = load_diabetes(return_X_y=True)\n",
"\n",
"model = Ridge().fit(dataset_x, dataset_y)\n",
"\n",
"joblib.dump(model, 'sklearn_regression_model.pkl')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register input and output datasets\n",
"\n",
"Here, you will register the data used to create the model in your workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"from azureml.core import Dataset\n",
"\n",
"\n",
"np.savetxt('features.csv', dataset_x, delimiter=',')\n",
"np.savetxt('labels.csv', dataset_y, delimiter=',')\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"datastore.upload_files(files=['./features.csv', './labels.csv'],\n",
"                       target_path='sklearn_regression/',\n",
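The hunk above is cut off at the upload call. As a rough sketch (not part of the diff; the dataset names and the header handling are assumptions), the uploaded CSVs can then be turned into registered Tabular datasets like this:

from azureml.core import Dataset

# 'datastore' and 'ws' come from the notebook cells above; the paths match the upload target_path.
input_dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, 'sklearn_regression/features.csv')], header=False)
output_dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, 'sklearn_regression/labels.csv')], header=False)

# Register them so Model.register below can attach them as sample input/output datasets (names are illustrative).
input_dataset = input_dataset.register(workspace=ws, name='sklearn_regression_features', create_new_version=True)
output_dataset = output_dataset.register(workspace=ws, name='sklearn_regression_labels', create_new_version=True)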
@@ -125,6 +158,8 @@
},
"outputs": [],
"source": [
"import sklearn\n",
"\n",
"from azureml.core import Model\n",
"from azureml.core.resource_configuration import ResourceConfiguration\n",
"\n",
@@ -133,7 +168,7 @@
"                       model_name='my-sklearn-model',                # Name of the registered model in your workspace.\n",
"                       model_path='./sklearn_regression_model.pkl',  # Local file to upload and register as a model.\n",
"                       model_framework=Model.Framework.SCIKITLEARN,  # Framework used to create the model.\n",
"                       model_framework_version='0.19.1',             # Version of scikit-learn used to create the model.\n",
"                       model_framework_version=sklearn.__version__,  # Version of scikit-learn used to create the model.\n",
"                       sample_input_dataset=input_dataset,\n",
"                       sample_output_dataset=output_dataset,\n",
"                       resource_configuration=ResourceConfiguration(cpu=1, memory_in_gb=0.5),\n",
@@ -174,19 +209,9 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Webservice\n",
"from azureml.exceptions import WebserviceException\n",
"\n",
"\n",
"service_name = 'my-sklearn-service'\n",
"\n",
"# Remove any existing service under the same name.\n",
"try:\n",
"    Webservice(ws, service_name).delete()\n",
"except WebserviceException:\n",
"    pass\n",
"\n",
"service = Model.deploy(ws, service_name, [model])\n",
"service = Model.deploy(ws, service_name, [model], overwrite=True)\n",
"service.wait_for_deployment(show_output=True)"
]
},
@@ -207,10 +232,7 @@
"\n",
"\n",
"input_payload = json.dumps({\n",
"    'data': [\n",
"        [ 0.03807591,  0.05068012,  0.06169621, 0.02187235, -0.0442235,\n",
"         -0.03482076, -0.04340085, -0.00259226, 0.01990842, -0.01764613]\n",
"    ],\n",
"    'data': dataset_x[0:2].tolist(),\n",
"    'method': 'predict'  # If you have a classification model, you can get probabilities by changing this to 'predict_proba'.\n",
"})\n",
"\n",
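For context, a brief sketch (assuming the service object returned by Model.deploy above) of how a payload like this is sent to the running ACI service, both through the SDK and as a raw REST call:

import requests

output = service.run(input_payload)  # SDK call against the deployed Webservice
print(output)

# The same request as a plain HTTP POST against the scoring endpoint.
response = requests.post(service.scoring_uri,
                         data=input_payload,
                         headers={'Content-Type': 'application/json'})
print(response.json())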
@@ -262,7 +284,7 @@
"    'inference-schema[numpy-support]',\n",
"    'joblib',\n",
"    'numpy',\n",
"    'scikit-learn'\n",
"    'scikit-learn=={}'.format(sklearn.__version__)\n",
"])"
]
},
@@ -303,20 +325,12 @@
},
"outputs": [],
"source": [
"from azureml.core import Webservice\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AciWebservice\n",
"from azureml.exceptions import WebserviceException\n",
"\n",
"\n",
"service_name = 'my-custom-env-service'\n",
"\n",
"# Remove any existing service under the same name.\n",
"try:\n",
"    Webservice(ws, service_name).delete()\n",
"except WebserviceException:\n",
"    pass\n",
"\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"\n",
@@ -324,7 +338,8 @@
"                       name=service_name,\n",
"                       models=[model],\n",
"                       inference_config=inference_config,\n",
"                       deployment_config=aci_config)\n",
"                       deployment_config=aci_config,\n",
"                       overwrite=True)\n",
"service.wait_for_deployment(show_output=True)"
]
},
@@ -342,10 +357,7 @@
"outputs": [],
"source": [
"input_payload = json.dumps({\n",
"    'data': [\n",
"        [ 0.03807591,  0.05068012,  0.06169621, 0.02187235, -0.0442235,\n",
"         -0.03482076, -0.04340085, -0.00259226, 0.01990842, -0.01764613]\n",
"    ]\n",
"    'data': dataset_x[0:2].tolist()\n",
"})\n",
"\n",
"output = service.run(input_payload)\n",
@@ -471,7 +483,7 @@
"    'inference-schema[numpy-support]',\n",
"    'joblib',\n",
"    'numpy',\n",
"    'scikit-learn'\n",
"    'scikit-learn=={}'.format(sklearn.__version__)\n",
"])\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"# if cpu and memory_in_gb parameters are not provided\n",
@@ -2,3 +2,5 @@ name: model-register-and-deploy
dependencies:
- pip:
  - azureml-sdk
  - numpy
  - scikit-learn
Binary file not shown.
@@ -1,8 +0,0 @@
name: project_environment
dependencies:
  - python=3.6.2
  - pip:
    - azureml-defaults
    - scikit-learn==0.19.1
    - numpy
    - inference-schema[numpy-support]
@@ -75,6 +75,33 @@
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create trained model\n",
"\n",
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"\n",
"dataset_x, dataset_y = load_diabetes(return_X_y=True)\n",
"\n",
"sk_model = Ridge().fit(dataset_x, dataset_y)\n",
"\n",
"joblib.dump(sk_model, \"sklearn_regression_model.pkl\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -148,13 +175,10 @@
"outputs": [],
"source": [
"%%writefile source_directory/x/y/score.py\n",
"import os\n",
"import pickle\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"from azureml.core.model import Model\n",
"import os\n",
"\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
@@ -165,16 +189,17 @@
"    # It holds the path to the directory that contains the deployed model (./azureml-models/$MODEL_NAME/$VERSION)\n",
"    # If there are multiple models, this value is the path to the directory containing all deployed models (./azureml-models)\n",
"    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
"    # deserialize the model file back into a sklearn model\n",
"    # Deserialize the model file back into a sklearn model.\n",
"    model = joblib.load(model_path)\n",
"\n",
"    global name\n",
"    # note here, entire source directory on inference config gets added into image\n",
"    # bellow is the example how you can use any extra files in image\n",
"    # Note here, the entire source directory from inference config gets added into image.\n",
"    # Below is an example of how you can use any extra files in image.\n",
"    with open('./source_directory/extradata.json') as json_file:\n",
"        data = json.load(json_file)\n",
"        name = data[\"people\"][0][\"name\"]\n",
"\n",
"input_sample = np.array([[10,9,8,7,6,5,4,3,2,1]])\n",
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
"output_sample = np.array([3726.995])\n",
"\n",
"@input_schema('data', NumpyParameterType(input_sample))\n",
@@ -182,37 +207,13 @@
"def run(data):\n",
"    try:\n",
"        result = model.predict(data)\n",
"        # you can return any datatype as long as it is JSON-serializable\n",
"        # You can return any JSON-serializable object.\n",
"        return \"Hello \" + name + \" here is your result = \" + str(result)\n",
"    except Exception as e:\n",
"        error = str(e)\n",
"        return error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency for your environment. This package contains the functionality needed to host the model as a web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile source_directory/env/myenv.yml\n",
"name: project_environment\n",
"dependencies:\n",
"  - python=3.6.2\n",
"  - pip:\n",
"    - azureml-defaults\n",
"    - scikit-learn\n",
"    - numpy\n",
"    - inference-schema[numpy-support]"
]
},
{
"cell_type": "code",
"execution_count": null,
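The note above asks for azureml-defaults with version >= 1.0.45. A minimal sketch of expressing that pin from Python instead of hand-editing the YAML file (the environment name and the exact package list are illustrative, not taken from the diff):

from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies

pinned_env = Environment('myenv-pinned')  # hypothetical name
pinned_env.python.conda_dependencies = CondaDependencies.create(pip_packages=[
    'azureml-defaults>=1.0.45',  # needed to host the model as a web service
    'inference-schema[numpy-support]',
    'scikit-learn',
    'numpy'
])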
@@ -249,11 +250,16 @@
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"\n",
"from azureml.core.environment import Environment\n",
"from azureml.core.model import InferenceConfig\n",
"\n",
"\n",
"myenv = Environment.from_conda_specification(name='myenv', file_path='myenv.yml')\n",
"myenv = Environment('myenv')\n",
"myenv.python.conda_dependencies.add_pip_package(\"inference-schema[numpy-support]\")\n",
"myenv.python.conda_dependencies.add_pip_package(\"joblib\")\n",
"myenv.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))\n",
"\n",
"# explicitly set base_image to None when setting base_dockerfile\n",
"myenv.docker.base_image = None\n",
@@ -262,7 +268,7 @@
"\n",
"inference_config = InferenceConfig(source_directory=source_directory,\n",
"                                   entry_script=\"x/y/score.py\",\n",
"                                   environment=myenv)\n"
"                                   environment=myenv)"
]
},
{
@@ -352,15 +358,10 @@
"import json\n",
"\n",
"sample_input = json.dumps({\n",
"    'data': [\n",
"        [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n",
"        [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]\n",
"    ]\n",
"    'data': dataset_x[0:2].tolist()\n",
"})\n",
"\n",
"sample_input = bytes(sample_input, encoding='utf-8')\n",
"\n",
"print(local_service.run(input_data=sample_input))"
"print(local_service.run(sample_input))"
]
},
{
@@ -379,12 +380,10 @@
"outputs": [],
"source": [
"%%writefile source_directory/x/y/score.py\n",
"import os\n",
"import pickle\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"from sklearn.externals import joblib\n",
"from sklearn.linear_model import Ridge\n",
"import os\n",
"\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
@@ -395,17 +394,18 @@
"    # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
"    # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
"    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
"    # deserialize the model file back into a sklearn model\n",
"    # Deserialize the model file back into a sklearn model.\n",
"    model = joblib.load(model_path)\n",
"\n",
"    global name, from_location\n",
"    # note here, entire source directory on inference config gets added into image\n",
"    # bellow is the example how you can use any extra files in image\n",
"    # Note here, the entire source directory from inference config gets added into image.\n",
"    # Below is an example of how you can use any extra files in image.\n",
"    with open('source_directory/extradata.json') as json_file: \n",
"        data = json.load(json_file)\n",
"        name = data[\"people\"][0][\"name\"]\n",
"        from_location = data[\"people\"][0][\"from\"]\n",
"\n",
"input_sample = np.array([[10,9,8,7,6,5,4,3,2,1]])\n",
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
"output_sample = np.array([3726.995])\n",
"\n",
"@input_schema('data', NumpyParameterType(input_sample))\n",
@@ -413,8 +413,8 @@
"def run(data):\n",
"    try:\n",
"        result = model.predict(data)\n",
"        # you can return any datatype as long as it is JSON-serializable\n",
"        return \"Hello \" + name + \" from \" + from_location + \" here is your result = \" + str(result)\n",
"        # You can return any JSON-serializable object.\n",
"        return \"Hello \" + name + \" from \" + from_location + \" here is your result = \" + str(result)\n",
"    except Exception as e:\n",
"        error = str(e)\n",
"        return error"
@@ -430,7 +430,7 @@
"print(\"--------------------------------------------------------------\")\n",
"\n",
"# After calling reload(), run() will return the updated message.\n",
"local_service.run(input_data=sample_input)"
"local_service.run(sample_input)"
]
},
{
@@ -0,0 +1,5 @@
name: register-model-deploy-local-advanced
dependencies:
- pip:
  - azureml-sdk
  - scikit-learn
@@ -71,6 +71,33 @@
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create trained model\n",
"\n",
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import Ridge\n",
"\n",
"dataset_x, dataset_y = load_diabetes(return_X_y=True)\n",
"\n",
"sk_model = Ridge().fit(dataset_x, dataset_y)\n",
"\n",
"joblib.dump(sk_model, \"sklearn_regression_model.pkl\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -82,9 +109,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You can add tags and descriptions to your models. we are using `sklearn_regression_model.pkl` file in the current directory as a model with the name `sklearn_regression_model` in the workspace.\n",
"Here we are registering the serialized file `sklearn_regression_model.pkl` in the current directory as a model with the name `sklearn_regression_model` in the workspace.\n",
"\n",
"Using tags, you can track useful information such as the name and version of the machine learning library used to train the model, framework, category, target customer etc. Note that tags must be alphanumeric."
"You can add tags and descriptions to your models. Using tags, you can track useful information such as the name and version of the machine learning library used to train the model, framework, category, target customer etc. Note that tags must be alphanumeric."
]
},
{
@@ -119,11 +146,62 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies\n",
"import sklearn\n",
"\n",
"from azureml.core.environment import Environment\n",
"\n",
"environment = Environment(\"LocalDeploy\")\n",
"environment.python.conda_dependencies = CondaDependencies(\"myenv.yml\")"
"environment.python.conda_dependencies.add_pip_package(\"inference-schema[numpy-support]\")\n",
"environment.python.conda_dependencies.add_pip_package(\"joblib\")\n",
"environment.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Provide the Scoring Script\n",
"\n",
"This Python script handles the model execution inside the service container. The `init()` method loads the model file, and `run(data)` is called for every input to the service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"import os\n",
"\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
"\n",
"def init():\n",
"    global model\n",
"    # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
"    # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
"    # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
"    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
"    # Deserialize the model file back into a sklearn model.\n",
"    model = joblib.load(model_path)\n",
"\n",
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
"output_sample = np.array([3726.995])\n",
"\n",
"@input_schema('data', NumpyParameterType(input_sample))\n",
"@output_schema(NumpyParameterType(output_sample))\n",
"def run(data):\n",
"    try:\n",
"        result = model.predict(data)\n",
"        # You can return any JSON-serializable object.\n",
"        return result.tolist()\n",
"    except Exception as e:\n",
"        error = str(e)\n",
"        return error"
]
},
{
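Because the new score.py resolves the model path from the AZUREML_MODEL_DIR environment variable, it can be smoke-tested outside a container by pointing that variable at the folder holding sklearn_regression_model.pkl (a local-testing sketch, not something the diff itself adds):

import os
import numpy as np

os.environ['AZUREML_MODEL_DIR'] = '.'  # directory containing sklearn_regression_model.pkl

import score  # the file written by %%writefile score.py above
score.init()
print(score.run(np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])))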
@@ -145,114 +223,6 @@
|
||||
" environment=environment)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Model Profiling\n",
|
||||
"\n",
|
||||
"Profile your model to understand how much CPU and memory the service, created as a result of its deployment, will need. Profiling returns information such as CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage. You can profile your model (or more precisely the service built based on your model) on any CPU and/or memory combination where 0.1 <= CPU <= 3.5 and 0.1GB <= memory <= 15GB. If you do not provide a CPU and/or memory requirement, we will test it on the default configuration of 3.5 CPU and 15GB memory.\n",
|
||||
"\n",
|
||||
"In order to profile your model you will need:\n",
|
||||
"- a registered model\n",
|
||||
"- an entry script\n",
|
||||
"- an inference configuration\n",
|
||||
"- a single column tabular dataset, where each row contains a string representing sample request data sent to the service.\n",
|
||||
"\n",
|
||||
"Please, note that profiling is a long running operation and can take up to 25 minutes depending on the size of the dataset.\n",
|
||||
"\n",
|
||||
"At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring.\n",
|
||||
"\n",
|
||||
"Below is an example of how you can construct an input dataset to profile a service which expects its incoming requests to contain serialized json. In this case we created a dataset based one hundred instances of the same request data. In real world scenarios however, we suggest that you use larger datasets with various inputs, especially if your model resource usage/behavior is input dependent."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"from azureml.core import Datastore\n",
|
||||
"from azureml.core.dataset import Dataset\n",
|
||||
"from azureml.data import dataset_type_definitions\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# create a string that can be put in the body of the request\n",
|
||||
"serialized_input_json = json.dumps({\n",
|
||||
" 'data': [\n",
|
||||
" [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n",
|
||||
" [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]\n",
|
||||
" ]\n",
|
||||
"})\n",
|
||||
"dataset_content = []\n",
|
||||
"for i in range(100):\n",
|
||||
" dataset_content.append(serialized_input_json)\n",
|
||||
"dataset_content = '\\n'.join(dataset_content)\n",
|
||||
"file_name = 'sample_request_data_diabetes.txt'\n",
|
||||
"f = open(file_name, 'w')\n",
|
||||
"f.write(dataset_content)\n",
|
||||
"f.close()\n",
|
||||
"\n",
|
||||
"# upload the txt file created above to the Datastore and create a dataset from it\n",
|
||||
"data_store = Datastore.get_default(ws)\n",
|
||||
"data_store.upload_files(['./' + file_name], target_path='sample_request_data_diabetes')\n",
|
||||
"datastore_path = [(data_store, 'sample_request_data_diabetes' +'/' + file_name)]\n",
|
||||
"sample_request_data_diabetes = Dataset.Tabular.from_delimited_files(\n",
|
||||
" datastore_path,\n",
|
||||
" separator='\\n',\n",
|
||||
" infer_column_types=True,\n",
|
||||
" header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)\n",
|
||||
"sample_request_data_diabetes = sample_request_data_diabetes.register(workspace=ws,\n",
|
||||
" name='sample_request_data_diabetes',\n",
|
||||
" create_new_version=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now that we have an input dataset we are ready to go ahead with profiling. In this case we are testing the previously introduced sklearn regression model on 1 CPU and 0.5 GB memory. The memory usage and recommendation presented in the result is measured in Gigabytes. The CPU usage and recommendation is measured in CPU cores."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from datetime import datetime\n",
|
||||
"from azureml.core import Environment\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"from azureml.core.model import Model, InferenceConfig\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"environment = Environment('my-sklearn-environment')\n",
|
||||
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
|
||||
" 'azureml-defaults',\n",
|
||||
" 'inference-schema[numpy-support]',\n",
|
||||
" 'joblib',\n",
|
||||
" 'numpy',\n",
|
||||
" 'scikit-learn==0.19.1',\n",
|
||||
" 'scipy'\n",
|
||||
"])\n",
|
||||
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
|
||||
"# if cpu and memory_in_gb parameters are not provided\n",
|
||||
"# the model will be profiled on default configuration of\n",
|
||||
"# 3.5CPU and 15GB memory\n",
|
||||
"profile = Model.profile(ws,\n",
|
||||
" 'profile-%s' % datetime.now().strftime('%m%d%Y-%H%M%S'),\n",
|
||||
" [model],\n",
|
||||
" inference_config,\n",
|
||||
" input_dataset=sample_request_data_diabetes,\n",
|
||||
" cpu=1.0,\n",
|
||||
" memory_in_gb=0.5)\n",
|
||||
"\n",
|
||||
"# profiling is a long running operation and may take up to 25 min\n",
|
||||
"profile.wait_for_completion(True)\n",
|
||||
"details = profile.get_details()"
|
||||
]
|
||||
},
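{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a small optional sketch of how you might inspect the profiling result. The exact keys returned by `get_details()` (for example `recommendedCpu` and `recommendedMemoryInGB`) are assumptions here and may vary by SDK version, so print the full dictionary first to confirm them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: the key names below are assumptions; inspect the printed\n",
"# dictionary to confirm what your SDK version actually returns.\n",
"print(details)\n",
"print('Recommended CPU (cores):', details.get('recommendedCpu'))\n",
"print('Recommended memory (GB):', details.get('recommendedMemoryInGB'))"
]
},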
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -339,15 +309,10 @@
|
||||
"import json\n",
|
||||
"\n",
|
||||
"sample_input = json.dumps({\n",
|
||||
" 'data': [\n",
|
||||
" [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n",
|
||||
" [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]\n",
|
||||
" ]\n",
|
||||
" 'data': dataset_x[0:2].tolist()\n",
|
||||
"})\n",
|
||||
"\n",
|
||||
"sample_input = bytes(sample_input, encoding='utf-8')\n",
|
||||
"\n",
|
||||
"local_service.run(input_data=sample_input)"
|
||||
"local_service.run(sample_input)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -366,12 +331,10 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile score.py\n",
|
||||
"import os\n",
|
||||
"import pickle\n",
|
||||
"import joblib\n",
|
||||
"import json\n",
|
||||
"import numpy as np\n",
|
||||
"from sklearn.externals import joblib\n",
|
||||
"from sklearn.linear_model import Ridge\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"from inference_schema.schema_decorators import input_schema, output_schema\n",
|
||||
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
|
||||
@@ -382,10 +345,10 @@
|
||||
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
|
||||
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
|
||||
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
|
||||
" # deserialize the model file back into a sklearn model\n",
|
||||
" # Deserialize the model file back into a sklearn model.\n",
|
||||
" model = joblib.load(model_path)\n",
|
||||
"\n",
|
||||
"input_sample = np.array([[10,9,8,7,6,5,4,3,2,1]])\n",
|
||||
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
|
||||
"output_sample = np.array([3726.995])\n",
|
||||
"\n",
|
||||
"@input_schema('data', NumpyParameterType(input_sample))\n",
|
||||
@@ -393,8 +356,8 @@
|
||||
"def run(data):\n",
|
||||
" try:\n",
|
||||
" result = model.predict(data)\n",
|
||||
" # you can return any datatype as long as it is JSON-serializable\n",
|
||||
" return 'hello from updated score.py'\n",
|
||||
" # You can return any JSON-serializable object.\n",
|
||||
" return 'Hello from the updated score.py: ' + str(result.tolist())\n",
|
||||
" except Exception as e:\n",
|
||||
" error = str(e)\n",
|
||||
" return error"
|
||||
@@ -410,7 +373,7 @@
|
||||
"print(\"--------------------------------------------------------------\")\n",
|
||||
"\n",
|
||||
"# After calling reload(), run() will return the updated message.\n",
|
||||
"local_service.run(input_data=sample_input)"
|
||||
"local_service.run(sample_input)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -0,0 +1,5 @@
|
||||
name: register-model-deploy-local
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- scikit-learn
|
||||
@@ -1,35 +0,0 @@
|
||||
import os
|
||||
import pickle
|
||||
import json
|
||||
import numpy as np
|
||||
from sklearn.externals import joblib
|
||||
from sklearn.linear_model import Ridge
|
||||
|
||||
from inference_schema.schema_decorators import input_schema, output_schema
|
||||
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
|
||||
|
||||
|
||||
def init():
|
||||
global model
|
||||
# AZUREML_MODEL_DIR is an environment variable created during deployment.
|
||||
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
|
||||
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
|
||||
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
|
||||
# deserialize the model file back into a sklearn model
|
||||
model = joblib.load(model_path)
|
||||
|
||||
|
||||
input_sample = np.array([[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]])
|
||||
output_sample = np.array([3726.995])
|
||||
|
||||
|
||||
@input_schema('data', NumpyParameterType(input_sample))
|
||||
@output_schema(NumpyParameterType(output_sample))
|
||||
def run(data):
|
||||
try:
|
||||
result = model.predict(data)
|
||||
# you can return any datatype as long as it is JSON-serializable
|
||||
return result.tolist()
|
||||
except Exception as e:
|
||||
error = str(e)
|
||||
return error
|
||||
Binary file not shown.
@@ -149,7 +149,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile score.py\n",
|
||||
"%%writefile score_ssl.py\n",
|
||||
"import os\n",
|
||||
"import pickle\n",
|
||||
"import json\n",
|
||||
@@ -201,7 +201,7 @@
|
||||
"source": [
|
||||
"from azureml.core.model import InferenceConfig\n",
|
||||
"\n",
|
||||
"inf_config = InferenceConfig(entry_script='score.py', environment=myenv)"
|
||||
"inf_config = InferenceConfig(entry_script='score_ssl.py', environment=myenv)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -109,7 +109,7 @@
|
||||
"from azureml.core import Environment\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies \n",
|
||||
"\n",
|
||||
"conda_deps = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.19.1','scipy'], pip_packages=['azureml-defaults'])\n",
|
||||
"conda_deps = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.19.1','scipy'], pip_packages=['azureml-defaults', 'inference-schema'])\n",
|
||||
"myenv = Environment(name='myenv')\n",
|
||||
"myenv.python.conda_dependencies = conda_deps"
|
||||
]
|
||||
|
||||
@@ -204,108 +204,9 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Option 1: Provision as a run based compute target\n",
|
||||
"### Option 1: Provision a compute target (Basic)\n",
|
||||
"\n",
|
||||
"You can provision AmlCompute as a compute target at run-time. In this case, the compute is auto-created for your run, scales up to max_nodes that you specify, and then **deleted automatically** after the run completes."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"from azureml.core.runconfig import DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"# create a new runconfig object\n",
|
||||
"run_config = RunConfiguration()\n",
|
||||
"\n",
|
||||
"# signal that you want to use AmlCompute to execute script.\n",
|
||||
"run_config.target = \"amlcompute\"\n",
|
||||
"\n",
|
||||
"# AmlCompute will be created in the same region as workspace\n",
|
||||
"# Set vm size for AmlCompute\n",
|
||||
"run_config.amlcompute.vm_size = 'STANDARD_D2_V2'\n",
|
||||
"\n",
|
||||
"# enable Docker \n",
|
||||
"run_config.environment.docker.enabled = True\n",
|
||||
"\n",
|
||||
"# set Docker base image to the default CPU-based image\n",
|
||||
"run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
|
||||
"run_config.environment.python.user_managed_dependencies = False\n",
|
||||
"\n",
|
||||
"azureml_pip_packages = [\n",
|
||||
" 'azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',\n",
|
||||
" 'azureml-interpret', 'sklearn-pandas', 'azureml-dataprep'\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"# Note: this is to pin the scikit-learn and pandas versions to be same as notebook.\n",
|
||||
"# In production scenario user would choose their dependencies\n",
|
||||
"import pkg_resources\n",
|
||||
"available_packages = pkg_resources.working_set\n",
|
||||
"sklearn_ver = None\n",
|
||||
"pandas_ver = None\n",
|
||||
"for dist in available_packages:\n",
|
||||
" if dist.key == 'scikit-learn':\n",
|
||||
" sklearn_ver = dist.version\n",
|
||||
" elif dist.key == 'pandas':\n",
|
||||
" pandas_ver = dist.version\n",
|
||||
"sklearn_dep = 'scikit-learn'\n",
|
||||
"pandas_dep = 'pandas'\n",
|
||||
"if sklearn_ver:\n",
|
||||
" sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)\n",
|
||||
"if pandas_ver:\n",
|
||||
" pandas_dep = 'pandas=={}'.format(pandas_ver)\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"# The CondaDependencies specifies the conda and pip packages that are installed in the environment\n",
|
||||
"# the submitted job is run in. Note the remote environment(s) needs to be similar to the local\n",
|
||||
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
|
||||
"# cause errors. Please take extra care when specifying your dependencies in a production environment.\n",
|
||||
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=[sklearn_dep, pandas_dep],\n",
|
||||
" pip_packages=azureml_pip_packages)\n",
|
||||
"\n",
|
||||
"# Now submit a run on AmlCompute\n",
|
||||
"from azureml.core.script_run_config import ScriptRunConfig\n",
|
||||
"\n",
|
||||
"script_run_config = ScriptRunConfig(source_directory=project_folder,\n",
|
||||
" script='train_explain.py',\n",
|
||||
" run_config=run_config)\n",
|
||||
"\n",
|
||||
"run = experiment.submit(script_run_config)\n",
|
||||
"\n",
|
||||
"# Show run details\n",
|
||||
"run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run)."
|
||||
]
|
||||
},
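{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: cancel the submitted run instead of waiting for it to finish.\n",
"# Uncomment only if you do not want the run to continue.\n",
"# run.cancel()"
]
},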
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"# Shows output of the run on stdout.\n",
|
||||
"run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Option 2: Provision as a persistent compute target (Basic)\n",
|
||||
"\n",
|
||||
"You can provision a persistent AmlCompute resource by simply defining two parameters thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continously re-use the same target, debug it between jobs or simply share the resource with other users of your workspace.\n",
|
||||
"You can provision an AmlCompute resource by simply defining two parameters thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continously re-use the same target, debug it between jobs or simply share the resource with other users of your workspace.\n",
|
||||
"\n",
|
||||
"* `vm_size`: VM family of the nodes provisioned by AmlCompute. Simply choose from the supported_vmsizes() above\n",
|
||||
"* `max_nodes`: Maximum nodes to autoscale to while running a job on AmlCompute"
|
||||
@@ -351,13 +252,13 @@
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"# create a new RunConfig object\n",
|
||||
"# Create a new RunConfig object\n",
|
||||
"run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to AmlCompute target created in previous step\n",
|
||||
"run_config.target = cpu_cluster.name\n",
|
||||
"\n",
|
||||
"# enable Docker \n",
|
||||
"# Enable Docker \n",
|
||||
"run_config.environment.docker.enabled = True\n",
|
||||
"\n",
|
||||
"azureml_pip_packages = [\n",
|
||||
@@ -382,7 +283,7 @@
|
||||
" sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)\n",
|
||||
"if pandas_ver:\n",
|
||||
" pandas_dep = 'pandas=={}'.format(pandas_ver)\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"# Specify CondaDependencies obj\n",
|
||||
"# The CondaDependencies specifies the conda and pip packages that are installed in the environment\n",
|
||||
"# the submitted job is run in. Note the remote environment(s) needs to be similar to the local\n",
|
||||
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
|
||||
@@ -400,6 +301,13 @@
|
||||
"run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
@@ -424,7 +332,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Option 3: Provision as a persistent compute target (Advanced)\n",
|
||||
"### Option 2: Provision a compute target (Advanced)\n",
|
||||
"\n",
|
||||
"You can also specify additional properties or change defaults while provisioning AmlCompute using a more advanced configuration. This is useful when you want a dedicated cluster of 4 nodes (for example you can set the min_nodes and max_nodes to 4), or want the compute to be within an existing VNet in your subscription.\n",
|
||||
"\n",
|
||||
@@ -483,13 +391,13 @@
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"# create a new RunConfig object\n",
|
||||
"# Create a new RunConfig object\n",
|
||||
"run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to AmlCompute target created in previous step\n",
|
||||
"run_config.target = cpu_cluster.name\n",
|
||||
"\n",
|
||||
"# enable Docker \n",
|
||||
"# Enable Docker \n",
|
||||
"run_config.environment.docker.enabled = True\n",
|
||||
"\n",
|
||||
"azureml_pip_packages = [\n",
|
||||
@@ -516,7 +424,7 @@
|
||||
" sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)\n",
|
||||
"if pandas_ver:\n",
|
||||
" pandas_dep = 'pandas=={}'.format(pandas_ver)\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"# Specify CondaDependencies obj\n",
|
||||
"# The CondaDependencies specifies the conda and pip packages that are installed in the environment\n",
|
||||
"# the submitted job is run in. Note the remote environment(s) needs to be similar to the local\n",
|
||||
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
|
||||
@@ -554,19 +462,6 @@
|
||||
"run.get_metrics()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient\n",
|
||||
"\n",
|
||||
"client = ExplanationClient.from_run(run)\n",
|
||||
"# Get the top k (e.g., 4) most important features with their importance values\n",
|
||||
"explanation = client.download_model_explanation(top_k=4)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -682,7 +577,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# retrieve model for visualization and deployment\n",
|
||||
"# Retrieve model for visualization and deployment\n",
|
||||
"from azureml.core.model import Model\n",
|
||||
"import joblib\n",
|
||||
"original_model = Model(ws, 'model_explain_model_on_amlcomp')\n",
|
||||
@@ -703,7 +598,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# retrieve x_test for visualization\n",
|
||||
"# Retrieve x_test for visualization\n",
|
||||
"import joblib\n",
|
||||
"x_test_path = './x_test_boston_housing.pkl'\n",
|
||||
"run.download_file('x_test_boston_housing.pkl', output_file_path=x_test_path)"
|
||||
|
||||
@@ -122,7 +122,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# get the IBM employee attrition dataset\n",
|
||||
"# Get the IBM employee attrition dataset\n",
|
||||
"outdirname = 'dataset.6.21.19'\n",
|
||||
"try:\n",
|
||||
" from urllib import urlretrieve\n",
|
||||
@@ -163,7 +163,7 @@
|
||||
"from sklearn.model_selection import train_test_split\n",
|
||||
"x_train, x_test, y_train, y_test = train_test_split(attritionXData, \n",
|
||||
" target, \n",
|
||||
" test_size = 0.2,\n",
|
||||
" test_size=0.2,\n",
|
||||
" random_state=0,\n",
|
||||
" stratify=target)"
|
||||
]
|
||||
@@ -223,7 +223,7 @@
|
||||
"# Append classifier to preprocessing pipeline.\n",
|
||||
"# Now we have a full prediction pipeline.\n",
|
||||
"clf = Pipeline(steps=[('preprocessor', transformations),\n",
|
||||
" ('classifier', SVC(kernel='linear', C = 1.0, probability=True))])"
|
||||
" ('classifier', SVC(C=1.0, probability=True))])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -249,7 +249,7 @@
|
||||
"# Append classifier to preprocessing pipeline.\n",
|
||||
"# Now we have a full prediction pipeline.\n",
|
||||
"clf = Pipeline(steps=[('preprocessor', transformations),\n",
|
||||
" ('classifier', SVC(kernel='linear', C = 1.0, probability=True))]) \n",
|
||||
" ('classifier', SVC(C=1.0, probability=True))]) \n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
@@ -393,7 +393,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# feature shap values for all features and all data points in the training data\n",
|
||||
"# Feature shap values for all features and all data points in the training data\n",
|
||||
"print('local importance values: {}'.format(global_explanation.local_importance_values))"
|
||||
]
|
||||
},
|
||||
@@ -450,8 +450,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"from azureml.core import Workspace, Experiment, Run\n",
|
||||
"from interpret.ext.blackbox import TabularExplainer\n",
|
||||
"from azureml.core import Workspace, Experiment\n",
|
||||
"from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient\n",
|
||||
"# Check core SDK version number\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
@@ -576,6 +575,23 @@
|
||||
"ExplanationDashboard(downloaded_global_explanation, model, datasetX=x_test)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## End\n",
|
||||
"Complete the run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run.complete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -141,7 +141,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# get IBM attrition data\n",
|
||||
"# Get IBM attrition data\n",
|
||||
"import os\n",
|
||||
"import pandas as pd\n",
|
||||
"\n",
|
||||
@@ -218,17 +218,17 @@
|
||||
"from sklearn.model_selection import train_test_split\n",
|
||||
"x_train, x_test, y_train, y_test = train_test_split(attritionXData,\n",
|
||||
" target,\n",
|
||||
" test_size = 0.2,\n",
|
||||
" test_size=0.2,\n",
|
||||
" random_state=0,\n",
|
||||
" stratify=target)\n",
|
||||
"\n",
|
||||
"# preprocess the data and fit the classification model\n",
|
||||
"# Preprocess the data and fit the classification model\n",
|
||||
"clf.fit(x_train, y_train)\n",
|
||||
"model = clf.steps[-1][1]\n",
|
||||
"\n",
|
||||
"model_file_name = 'log_reg.pkl'\n",
|
||||
"\n",
|
||||
"# save model in the outputs folder so it automatically get uploaded\n",
|
||||
"# Save model in the outputs folder so it automatically get uploaded\n",
|
||||
"with open(model_file_name, 'wb') as file:\n",
|
||||
" joblib.dump(value=clf, filename=os.path.join('./outputs/',\n",
|
||||
" model_file_name))"
|
||||
@@ -345,7 +345,7 @@
|
||||
" sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)\n",
|
||||
"if pandas_ver:\n",
|
||||
" pandas_dep = 'pandas=={}'.format(pandas_ver)\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"# Specify CondaDependencies obj\n",
|
||||
"# The CondaDependencies specifies the conda and pip packages that are installed in the environment\n",
|
||||
"# the submitted job is run in. Note the remote environment(s) needs to be similar to the local\n",
|
||||
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
|
||||
@@ -368,7 +368,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import Model\n",
|
||||
"# retrieve scoring explainer for deployment\n",
|
||||
"# Retrieve scoring explainer for deployment\n",
|
||||
"scoring_explainer_model = Model(ws, 'IBM_attrition_explainer')"
|
||||
]
|
||||
},
|
||||
@@ -416,11 +416,11 @@
|
||||
"\n",
|
||||
"headers = {'Content-Type':'application/json'}\n",
|
||||
"\n",
|
||||
"# send request to service\n",
|
||||
"# Send request to service\n",
|
||||
"print(\"POST to url\", service.scoring_uri)\n",
|
||||
"resp = requests.post(service.scoring_uri, sample_data, headers=headers)\n",
|
||||
"\n",
|
||||
"# can covert back to Python objects from json string if desired\n",
|
||||
"# Can covert back to Python objects from json string if desired\n",
|
||||
"print(\"prediction:\", resp.text)\n",
|
||||
"result = json.loads(resp.text)"
|
||||
]
|
||||
@@ -431,7 +431,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#plot the feature importance for the prediction\n",
|
||||
"# Plot the feature importance for the prediction\n",
|
||||
"import numpy as np\n",
|
||||
"import matplotlib.pyplot as plt; plt.rcdefaults()\n",
|
||||
"\n",
|
||||
|
||||
@@ -156,7 +156,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Submit an AmlCompute run in a few different ways\n",
|
||||
"## Submit an AmlCompute run\n",
|
||||
"\n",
|
||||
"First lets check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the supported_vms () function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.\n",
|
||||
"\n",
|
||||
@@ -202,9 +202,43 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Provision as a run based compute target\n",
|
||||
"### Provision a compute target\n",
|
||||
"\n",
|
||||
"You can provision AmlCompute as a compute target at run-time. In this case, the compute is auto-created for your run, scales up to max_nodes that you specify, and then **deleted automatically** after the run completes."
|
||||
"You can provision an AmlCompute resource by simply defining two parameters thanks to smart defaults. By default it autoscales from 0 nodes and provisions dedicated VMs to run your job in a container. This is useful when you want to continously re-use the same target, debug it between jobs or simply share the resource with other users of your workspace.\n",
|
||||
"\n",
|
||||
"* `vm_size`: VM family of the nodes provisioned by AmlCompute. Simply choose from the supported_vmsizes() above\n",
|
||||
"* `max_nodes`: Maximum nodes to autoscale to while running a job on AmlCompute"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"\n",
|
||||
"# Choose a name for your CPU cluster\n",
|
||||
"cpu_cluster_name = \"cpu-cluster\"\n",
|
||||
"\n",
|
||||
"# Verify that cluster does not exist already\n",
|
||||
"try:\n",
|
||||
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
|
||||
" print('Found existing cluster, use it.')\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',\n",
|
||||
" max_nodes=4)\n",
|
||||
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
|
||||
"\n",
|
||||
"cpu_cluster.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Configure & Run"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -217,28 +251,21 @@
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"from azureml.core.runconfig import DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"# create a new runconfig object\n",
|
||||
"# Create a new runconfig object\n",
|
||||
"run_config = RunConfiguration()\n",
|
||||
"\n",
|
||||
"# signal that you want to use AmlCompute to execute script.\n",
|
||||
"run_config.target = \"amlcompute\"\n",
|
||||
"# Set compute target to AmlCompute target created in previous step\n",
|
||||
"run_config.target = cpu_cluster.name\n",
|
||||
"\n",
|
||||
"# AmlCompute will be created in the same region as workspace\n",
|
||||
"# Set vm size for AmlCompute\n",
|
||||
"run_config.amlcompute.vm_size = 'STANDARD_D2_V2'\n",
|
||||
"\n",
|
||||
"# enable Docker \n",
|
||||
"# Enable Docker \n",
|
||||
"run_config.environment.docker.enabled = True\n",
|
||||
"\n",
|
||||
"# set Docker base image to the default CPU-based image\n",
|
||||
"# Set Docker base image to the default CPU-based image\n",
|
||||
"run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
|
||||
"# Use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
|
||||
"run_config.environment.python.user_managed_dependencies = False\n",
|
||||
"\n",
|
||||
"# auto-prepare the Docker image when used for execution (if it is not already prepared)\n",
|
||||
"run_config.auto_prepare_environment = True\n",
|
||||
"\n",
|
||||
"azureml_pip_packages = [\n",
|
||||
" 'azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',\n",
|
||||
" 'azureml-interpret', 'azureml-dataprep'\n",
|
||||
@@ -263,7 +290,7 @@
|
||||
" sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)\n",
|
||||
"if pandas_ver:\n",
|
||||
" pandas_dep = 'pandas=={}'.format(pandas_ver)\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"# Specify CondaDependencies obj\n",
|
||||
"# The CondaDependencies specifies the conda and pip packages that are installed in the environment\n",
|
||||
"# the submitted job is run in. Note the remote environment(s) needs to be similar to the local\n",
|
||||
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
|
||||
@@ -327,7 +354,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# retrieve model for visualization and deployment\n",
|
||||
"# Retrieve model for visualization and deployment\n",
|
||||
"from azureml.core.model import Model\n",
|
||||
"import joblib\n",
|
||||
"original_model = Model(ws, 'amlcompute_deploy_model')\n",
|
||||
@@ -341,7 +368,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# retrieve global explanation for visualization\n",
|
||||
"# Retrieve global explanation for visualization\n",
|
||||
"from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient\n",
|
||||
"\n",
|
||||
"# get model explanation data\n",
|
||||
@@ -355,7 +382,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# retrieve x_test for visualization\n",
|
||||
"# Retrieve x_test for visualization\n",
|
||||
"import joblib\n",
|
||||
"x_test_path = './x_test.pkl'\n",
|
||||
"run.download_file('x_test_ibm.pkl', output_file_path=x_test_path)\n",
|
||||
@@ -435,7 +462,7 @@
|
||||
" sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)\n",
|
||||
"if pandas_ver:\n",
|
||||
" pandas_dep = 'pandas=={}'.format(pandas_ver)\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"# Specify CondaDependencies obj\n",
|
||||
"# The CondaDependencies specifies the conda and pip packages that are installed in the environment\n",
|
||||
"# the submitted job is run in. Note the remote environment(s) needs to be similar to the local\n",
|
||||
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
|
||||
@@ -457,7 +484,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# retrieve scoring explainer for deployment\n",
|
||||
"# Retrieve scoring explainer for deployment\n",
|
||||
"scoring_explainer_model = Model(ws, 'IBM_attrition_explainer')"
|
||||
]
|
||||
},
|
||||
@@ -496,17 +523,17 @@
|
||||
"source": [
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"# create data to test service with\n",
|
||||
"# Create data to test service with\n",
|
||||
"examples = x_test[:4]\n",
|
||||
"input_data = examples.to_json()\n",
|
||||
"\n",
|
||||
"headers = {'Content-Type':'application/json'}\n",
|
||||
"\n",
|
||||
"# send request to service\n",
|
||||
"# Send request to service\n",
|
||||
"print(\"POST to url\", service.scoring_uri)\n",
|
||||
"resp = requests.post(service.scoring_uri, input_data, headers=headers)\n",
|
||||
"\n",
|
||||
"# can covert back to Python objects from json string if desired\n",
|
||||
"# Can covert back to Python objects from json string if desired\n",
|
||||
"print(\"prediction:\", resp.text)"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -716,7 +716,6 @@
|
||||
"\n",
|
||||
"trainWithAutomlStep = AutoMLStep(name='AutoML_Regression',\n",
|
||||
" automl_config=automl_config,\n",
|
||||
" passthru_automl_config=False,\n",
|
||||
" allow_reuse=True)\n",
|
||||
"print(\"trainWithAutomlStep created.\")"
|
||||
]
|
||||
|
||||
@@ -13,7 +13,7 @@ def init():
|
||||
global g_tf_sess
|
||||
|
||||
# pull down model from workspace
|
||||
model_path = Model.get_model_path("mnist")
|
||||
model_path = Model.get_model_path("mnist-prs")
|
||||
|
||||
# construct graph to execute
|
||||
tf.reset_default_graph()
|
||||
|
||||
@@ -120,6 +120,6 @@ pipeline_run.wait_for_completion(show_output=True)
|
||||
|
||||
- [file-dataset-image-inference-mnist.ipynb](./file-dataset-image-inference-mnist.ipynb) demonstrates how to run batch inference on an MNIST dataset using FileDataset.
|
||||
- [tabular-dataset-inference-iris.ipynb](./tabular-dataset-inference-iris.ipynb) demonstrates how to run batch inference on an IRIS dataset using TabularDataset.
|
||||
- [pipeline-style-transfer.ipynb](../pipeline-style-transfer/pipeline-style-transfer.ipynb) demonstrates using ParallelRunStep in multi-step pipeline and using output from one step as input to ParallelRunStep.
|
||||
- [pipeline-style-transfer-parallel-run.ipynb](../pipeline-style-transfer/pipeline-style-transfer-parallel-run.ipynb) demonstrates using ParallelRunStep in a multi-step pipeline and using output from one step as input to ParallelRunStep.
|
||||
|
||||

|
||||
|
||||
@@ -274,7 +274,7 @@
|
||||
"\n",
|
||||
"# register downloaded model \n",
|
||||
"model = Model.register(model_path = \"models/\",\n",
|
||||
" model_name = \"mnist\", # this is the name the model is registered as\n",
|
||||
" model_name = \"mnist-prs\", # this is the name the model is registered as\n",
|
||||
" tags = {'pretrained': \"mnist\"},\n",
|
||||
" description = \"Mnist trained tensorflow model\",\n",
|
||||
" workspace = ws)"
|
||||
@@ -474,8 +474,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"path_on_datastore = mnist_data.path('mnist/0.png')\n",
|
||||
"single_image_ds = Dataset.File.from_files(path=path_on_datastore, validate=False)\n",
|
||||
"single_image_ds._ensure_saved(ws)"
|
||||
"single_image_ds = Dataset.File.from_files(path=path_on_datastore, validate=False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -227,7 +227,7 @@
|
||||
"\n",
|
||||
"# register downloaded model\n",
|
||||
"model = Model.register(model_path = \"iris_model.pkl/iris_model.pkl\",\n",
|
||||
" model_name = \"iris\", # this is the name the model is registered as\n",
|
||||
" model_name = \"iris-prs\", # this is the name the model is registered as\n",
|
||||
" tags = {'pretrained': \"iris\"},\n",
|
||||
" workspace = ws)"
|
||||
]
|
||||
@@ -332,7 +332,7 @@
|
||||
" append_row_file_name=\"iris_outputs.txt\",\n",
|
||||
" environment=predict_env,\n",
|
||||
" compute_target=compute_target, \n",
|
||||
" node_count=3,\n",
|
||||
" node_count=2,\n",
|
||||
" run_invocation_timeout=600\n",
|
||||
")"
|
||||
]
|
||||
@@ -356,7 +356,7 @@
|
||||
" inputs=[named_iris_ds],\n",
|
||||
" output=output_folder,\n",
|
||||
" parallel_run_config=parallel_run_config,\n",
|
||||
" arguments=['--model_name', 'iris'],\n",
|
||||
" arguments=['--model_name', 'iris-prs'],\n",
|
||||
" allow_reuse=True\n",
|
||||
")"
|
||||
]
|
||||
@@ -380,7 +380,7 @@
|
||||
"\n",
|
||||
"pipeline = Pipeline(workspace=ws, steps=[distributed_csv_iris_step])\n",
|
||||
"\n",
|
||||
"pipeline_run = Experiment(ws, 'iris').submit(pipeline)"
|
||||
"pipeline_run = Experiment(ws, 'iris-prs').submit(pipeline)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -0,0 +1,185 @@
|
||||
# Original source: https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py
|
||||
import argparse
|
||||
import os
|
||||
import sys
|
||||
import re
|
||||
|
||||
from PIL import Image
|
||||
import torch
|
||||
from torchvision import transforms
|
||||
|
||||
|
||||
def load_image(filename, size=None, scale=None):
|
||||
img = Image.open(filename)
|
||||
if size is not None:
|
||||
img = img.resize((size, size), Image.ANTIALIAS)
|
||||
elif scale is not None:
|
||||
img = img.resize((int(img.size[0] / scale), int(img.size[1] / scale)), Image.ANTIALIAS)
|
||||
return img
|
||||
|
||||
|
||||
def save_image(filename, data):
|
||||
img = data.clone().clamp(0, 255).numpy()
|
||||
img = img.transpose(1, 2, 0).astype("uint8")
|
||||
img = Image.fromarray(img)
|
||||
img.save(filename)
|
||||
|
||||
|
||||
class TransformerNet(torch.nn.Module):
|
||||
def __init__(self):
|
||||
super(TransformerNet, self).__init__()
|
||||
# Initial convolution layers
|
||||
self.conv1 = ConvLayer(3, 32, kernel_size=9, stride=1)
|
||||
self.in1 = torch.nn.InstanceNorm2d(32, affine=True)
|
||||
self.conv2 = ConvLayer(32, 64, kernel_size=3, stride=2)
|
||||
self.in2 = torch.nn.InstanceNorm2d(64, affine=True)
|
||||
self.conv3 = ConvLayer(64, 128, kernel_size=3, stride=2)
|
||||
self.in3 = torch.nn.InstanceNorm2d(128, affine=True)
|
||||
# Residual layers
|
||||
self.res1 = ResidualBlock(128)
|
||||
self.res2 = ResidualBlock(128)
|
||||
self.res3 = ResidualBlock(128)
|
||||
self.res4 = ResidualBlock(128)
|
||||
self.res5 = ResidualBlock(128)
|
||||
# Upsampling Layers
|
||||
self.deconv1 = UpsampleConvLayer(128, 64, kernel_size=3, stride=1, upsample=2)
|
||||
self.in4 = torch.nn.InstanceNorm2d(64, affine=True)
|
||||
self.deconv2 = UpsampleConvLayer(64, 32, kernel_size=3, stride=1, upsample=2)
|
||||
self.in5 = torch.nn.InstanceNorm2d(32, affine=True)
|
||||
self.deconv3 = ConvLayer(32, 3, kernel_size=9, stride=1)
|
||||
# Non-linearities
|
||||
self.relu = torch.nn.ReLU()
|
||||
|
||||
def forward(self, X):
|
||||
y = self.relu(self.in1(self.conv1(X)))
|
||||
y = self.relu(self.in2(self.conv2(y)))
|
||||
y = self.relu(self.in3(self.conv3(y)))
|
||||
y = self.res1(y)
|
||||
y = self.res2(y)
|
||||
y = self.res3(y)
|
||||
y = self.res4(y)
|
||||
y = self.res5(y)
|
||||
y = self.relu(self.in4(self.deconv1(y)))
|
||||
y = self.relu(self.in5(self.deconv2(y)))
|
||||
y = self.deconv3(y)
|
||||
return y
|
||||
|
||||
|
||||
class ConvLayer(torch.nn.Module):
|
||||
def __init__(self, in_channels, out_channels, kernel_size, stride):
|
||||
super(ConvLayer, self).__init__()
|
||||
reflection_padding = kernel_size // 2
|
||||
self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
|
||||
self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)
|
||||
|
||||
def forward(self, x):
|
||||
out = self.reflection_pad(x)
|
||||
out = self.conv2d(out)
|
||||
return out
|
||||
|
||||
|
||||
class ResidualBlock(torch.nn.Module):
|
||||
"""ResidualBlock
|
||||
introduced in: https://arxiv.org/abs/1512.03385
|
||||
recommended architecture: http://torch.ch/blog/2016/02/04/resnets.html
|
||||
"""
|
||||
|
||||
def __init__(self, channels):
|
||||
super(ResidualBlock, self).__init__()
|
||||
self.conv1 = ConvLayer(channels, channels, kernel_size=3, stride=1)
|
||||
self.in1 = torch.nn.InstanceNorm2d(channels, affine=True)
|
||||
self.conv2 = ConvLayer(channels, channels, kernel_size=3, stride=1)
|
||||
self.in2 = torch.nn.InstanceNorm2d(channels, affine=True)
|
||||
self.relu = torch.nn.ReLU()
|
||||
|
||||
def forward(self, x):
|
||||
residual = x
|
||||
out = self.relu(self.in1(self.conv1(x)))
|
||||
out = self.in2(self.conv2(out))
|
||||
out = out + residual
|
||||
return out
|
||||
|
||||
|
||||
class UpsampleConvLayer(torch.nn.Module):
|
||||
"""UpsampleConvLayer
|
||||
Upsamples the input and then does a convolution. This method gives better results
|
||||
compared to ConvTranspose2d.
|
||||
ref: http://distill.pub/2016/deconv-checkerboard/
|
||||
"""
|
||||
|
||||
def __init__(self, in_channels, out_channels, kernel_size, stride, upsample=None):
|
||||
super(UpsampleConvLayer, self).__init__()
|
||||
self.upsample = upsample
|
||||
if upsample:
|
||||
self.upsample_layer = torch.nn.Upsample(mode='nearest', scale_factor=upsample)
|
||||
reflection_padding = kernel_size // 2
|
||||
self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
|
||||
self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)
|
||||
|
||||
def forward(self, x):
|
||||
x_in = x
|
||||
if self.upsample:
|
||||
x_in = self.upsample_layer(x_in)
|
||||
out = self.reflection_pad(x_in)
|
||||
out = self.conv2d(out)
|
||||
return out
|
||||
|
||||
|
||||
def stylize(args):
|
||||
device = torch.device("cuda" if args.cuda else "cpu")
|
||||
with torch.no_grad():
|
||||
style_model = TransformerNet()
|
||||
state_dict = torch.load(os.path.join(args.model_dir, args.style + ".pth"))
|
||||
# remove saved deprecated running_* keys in InstanceNorm from the checkpoint
|
||||
for k in list(state_dict.keys()):
|
||||
if re.search(r'in\d+\.running_(mean|var)$', k):
|
||||
del state_dict[k]
|
||||
style_model.load_state_dict(state_dict)
|
||||
style_model.to(device)
|
||||
|
||||
filenames = os.listdir(args.content_dir)
|
||||
|
||||
for filename in filenames:
|
||||
print("Processing {}".format(filename))
|
||||
full_path = os.path.join(args.content_dir, filename)
|
||||
content_image = load_image(full_path, scale=args.content_scale)
|
||||
content_transform = transforms.Compose([
|
||||
transforms.ToTensor(),
|
||||
transforms.Lambda(lambda x: x.mul(255))
|
||||
])
|
||||
content_image = content_transform(content_image)
|
||||
content_image = content_image.unsqueeze(0).to(device)
|
||||
|
||||
output = style_model(content_image).cpu()
|
||||
|
||||
output_path = os.path.join(args.output_dir, filename)
|
||||
save_image(output_path, output[0])
|
||||
|
||||
|
||||
def main():
|
||||
arg_parser = argparse.ArgumentParser(description="parser for fast-neural-style")
|
||||
|
||||
arg_parser.add_argument("--content-scale", type=float, default=None,
|
||||
help="factor for scaling down the content image")
|
||||
arg_parser.add_argument("--model-dir", type=str, required=True,
|
||||
help="saved model to be used for stylizing the image.")
|
||||
arg_parser.add_argument("--cuda", type=int, required=True,
|
||||
help="set it to 1 for running on GPU, 0 for CPU")
|
||||
arg_parser.add_argument("--style", type=str,
|
||||
help="style name")
|
||||
|
||||
arg_parser.add_argument("--content-dir", type=str, required=True,
|
||||
help="directory holding the images")
|
||||
arg_parser.add_argument("--output-dir", type=str, required=True,
|
||||
help="directory holding the output images")
|
||||
args = arg_parser.parse_args()
|
||||
|
||||
if args.cuda and not torch.cuda.is_available():
|
||||
print("ERROR: cuda is not available, try running on CPU")
|
||||
sys.exit(1)
|
||||
os.makedirs(args.output_dir, exist_ok=True)
|
||||
stylize(args)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -0,0 +1,207 @@
|
||||
# Original source: https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py
|
||||
import argparse
|
||||
import os
|
||||
import sys
|
||||
import re
|
||||
|
||||
from PIL import Image
|
||||
import torch
|
||||
from torchvision import transforms
|
||||
|
||||
from mpi4py import MPI
|
||||
|
||||
|
||||
def load_image(filename, size=None, scale=None):
|
||||
img = Image.open(filename)
|
||||
if size is not None:
|
||||
img = img.resize((size, size), Image.ANTIALIAS)
|
||||
elif scale is not None:
|
||||
img = img.resize((int(img.size[0] / scale), int(img.size[1] / scale)), Image.ANTIALIAS)
|
||||
return img
|
||||
|
||||
|
||||
def save_image(filename, data):
|
||||
img = data.clone().clamp(0, 255).numpy()
|
||||
img = img.transpose(1, 2, 0).astype("uint8")
|
||||
img = Image.fromarray(img)
|
||||
img.save(filename)
|
||||
|
||||
|
||||
class TransformerNet(torch.nn.Module):
|
||||
def __init__(self):
|
||||
super(TransformerNet, self).__init__()
|
||||
# Initial convolution layers
|
||||
self.conv1 = ConvLayer(3, 32, kernel_size=9, stride=1)
|
||||
self.in1 = torch.nn.InstanceNorm2d(32, affine=True)
|
||||
self.conv2 = ConvLayer(32, 64, kernel_size=3, stride=2)
|
||||
self.in2 = torch.nn.InstanceNorm2d(64, affine=True)
|
||||
self.conv3 = ConvLayer(64, 128, kernel_size=3, stride=2)
|
||||
self.in3 = torch.nn.InstanceNorm2d(128, affine=True)
|
||||
# Residual layers
|
||||
self.res1 = ResidualBlock(128)
|
||||
self.res2 = ResidualBlock(128)
|
||||
self.res3 = ResidualBlock(128)
|
||||
self.res4 = ResidualBlock(128)
|
||||
self.res5 = ResidualBlock(128)
|
||||
# Upsampling Layers
|
||||
self.deconv1 = UpsampleConvLayer(128, 64, kernel_size=3, stride=1, upsample=2)
|
||||
self.in4 = torch.nn.InstanceNorm2d(64, affine=True)
|
||||
self.deconv2 = UpsampleConvLayer(64, 32, kernel_size=3, stride=1, upsample=2)
|
||||
self.in5 = torch.nn.InstanceNorm2d(32, affine=True)
|
||||
self.deconv3 = ConvLayer(32, 3, kernel_size=9, stride=1)
|
||||
# Non-linearities
|
||||
self.relu = torch.nn.ReLU()
|
||||
|
||||
def forward(self, X):
|
||||
y = self.relu(self.in1(self.conv1(X)))
|
||||
y = self.relu(self.in2(self.conv2(y)))
|
||||
y = self.relu(self.in3(self.conv3(y)))
|
||||
y = self.res1(y)
|
||||
y = self.res2(y)
|
||||
y = self.res3(y)
|
||||
y = self.res4(y)
|
||||
y = self.res5(y)
|
||||
y = self.relu(self.in4(self.deconv1(y)))
|
||||
y = self.relu(self.in5(self.deconv2(y)))
|
||||
y = self.deconv3(y)
|
||||
return y
|
||||
|
||||
|
||||
class ConvLayer(torch.nn.Module):
|
||||
def __init__(self, in_channels, out_channels, kernel_size, stride):
|
||||
super(ConvLayer, self).__init__()
|
||||
reflection_padding = kernel_size // 2
|
||||
self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
|
||||
self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)
|
||||
|
||||
def forward(self, x):
|
||||
out = self.reflection_pad(x)
|
||||
out = self.conv2d(out)
|
||||
return out
|
||||
|
||||
|
||||
class ResidualBlock(torch.nn.Module):
|
||||
"""ResidualBlock
|
||||
introduced in: https://arxiv.org/abs/1512.03385
|
||||
recommended architecture: http://torch.ch/blog/2016/02/04/resnets.html
|
||||
"""
|
||||
|
||||
def __init__(self, channels):
|
||||
super(ResidualBlock, self).__init__()
|
||||
self.conv1 = ConvLayer(channels, channels, kernel_size=3, stride=1)
|
||||
self.in1 = torch.nn.InstanceNorm2d(channels, affine=True)
|
||||
self.conv2 = ConvLayer(channels, channels, kernel_size=3, stride=1)
|
||||
self.in2 = torch.nn.InstanceNorm2d(channels, affine=True)
|
||||
self.relu = torch.nn.ReLU()
|
||||
|
||||
def forward(self, x):
|
||||
residual = x
|
||||
out = self.relu(self.in1(self.conv1(x)))
|
||||
out = self.in2(self.conv2(out))
|
||||
out = out + residual
|
||||
return out
|
||||
|
||||
|
||||
class UpsampleConvLayer(torch.nn.Module):
|
||||
"""UpsampleConvLayer
|
||||
Upsamples the input and then does a convolution. This method gives better results
|
||||
compared to ConvTranspose2d.
|
||||
ref: http://distill.pub/2016/deconv-checkerboard/
|
||||
"""
|
||||
|
||||
def __init__(self, in_channels, out_channels, kernel_size, stride, upsample=None):
|
||||
super(UpsampleConvLayer, self).__init__()
|
||||
self.upsample = upsample
|
||||
if upsample:
|
||||
self.upsample_layer = torch.nn.Upsample(mode='nearest', scale_factor=upsample)
|
||||
reflection_padding = kernel_size // 2
|
||||
self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
|
||||
self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)
|
||||
|
||||
def forward(self, x):
|
||||
x_in = x
|
||||
if self.upsample:
|
||||
x_in = self.upsample_layer(x_in)
|
||||
out = self.reflection_pad(x_in)
|
||||
out = self.conv2d(out)
|
||||
return out
|
||||
|
||||
|
||||
def stylize(args, comm):
|
||||
|
||||
rank = comm.Get_rank()
|
||||
size = comm.Get_size()
|
||||
|
||||
device = torch.device("cuda" if args.cuda else "cpu")
|
||||
with torch.no_grad():
|
||||
style_model = TransformerNet()
|
||||
state_dict = torch.load(os.path.join(args.model_dir, args.style + ".pth"))
|
||||
# remove saved deprecated running_* keys in InstanceNorm from the checkpoint
|
||||
for k in list(state_dict.keys()):
|
||||
if re.search(r'in\d+\.running_(mean|var)$', k):
|
||||
del state_dict[k]
|
||||
style_model.load_state_dict(state_dict)
|
||||
style_model.to(device)
|
||||
|
||||
filenames = os.listdir(args.content_dir)
|
||||
filenames = sorted(filenames)
|
||||
partition_size = len(filenames) // size
|
||||
partitioned_filenames = filenames[rank * partition_size: (rank + 1) * partition_size]
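# Note: this simple split leaves the last len(filenames) % size files unassigned when the file count is not divisible by the number of ranks.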
|
||||
print("RANK {} - is processing {} images out of the total {}".format(rank, len(partitioned_filenames),
|
||||
len(filenames)))
|
||||
|
||||
output_paths = []
|
||||
for filename in partitioned_filenames:
|
||||
# print("Processing {}".format(filename))
|
||||
full_path = os.path.join(args.content_dir, filename)
|
||||
content_image = load_image(full_path, scale=args.content_scale)
|
||||
content_transform = transforms.Compose([
|
||||
transforms.ToTensor(),
|
||||
transforms.Lambda(lambda x: x.mul(255))
|
||||
])
|
||||
content_image = content_transform(content_image)
|
||||
content_image = content_image.unsqueeze(0).to(device)
|
||||
|
||||
output = style_model(content_image).cpu()
|
||||
|
||||
output_path = os.path.join(args.output_dir, filename)
|
||||
save_image(output_path, output[0])
|
||||
|
||||
output_paths.append(output_path)
|
||||
|
||||
print("RANK {} - number of pre-aggregated output files {}".format(rank, len(output_paths)))
|
||||
|
||||
output_paths_list = comm.gather(output_paths, root=0)
|
||||
|
||||
if rank == 0:
|
||||
print("RANK {} - number of aggregated output files {}".format(rank, len(output_paths_list)))
|
||||
print("RANK {} - end".format(rank))
|
||||
|
||||
|
||||
def main():
|
||||
arg_parser = argparse.ArgumentParser(description="parser for fast-neural-style")
|
||||
|
||||
arg_parser.add_argument("--content-scale", type=float, default=None,
|
||||
help="factor for scaling down the content image")
|
||||
arg_parser.add_argument("--model-dir", type=str, required=True,
|
||||
help="saved model to be used for stylizing the image.")
|
||||
arg_parser.add_argument("--cuda", type=int, required=True,
|
||||
help="set it to 1 for running on GPU, 0 for CPU")
|
||||
arg_parser.add_argument("--style", type=str, help="style name")
|
||||
arg_parser.add_argument("--content-dir", type=str, required=True,
|
||||
help="directory holding the images")
|
||||
arg_parser.add_argument("--output-dir", type=str, required=True,
|
||||
help="directory holding the output images")
|
||||
args = arg_parser.parse_args()
|
||||
|
||||
comm = MPI.COMM_WORLD
|
||||
|
||||
if args.cuda and not torch.cuda.is_available():
|
||||
print("ERROR: cuda is not available, try running on CPU")
|
||||
sys.exit(1)
|
||||
os.makedirs(args.output_dir, exist_ok=True)
|
||||
stylize(args, comm)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -0,0 +1,728 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Neural style transfer on video\n",
|
||||
"Using modified code from `pytorch`'s neural style [example](https://pytorch.org/tutorials/advanced/neural_style_tutorial.html), we show how to setup a pipeline for doing style transfer on video. The pipeline has following steps:\n",
|
||||
"1. Split a video into images\n",
|
||||
"2. Run neural style on each image using one of the provided models (from `pytorch` pretrained models for this example).\n",
|
||||
"3. Stitch the image back into a video."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize Workspace\n",
|
||||
"\n",
|
||||
"Initialize a workspace object from persisted configuration."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"from azureml.core import Workspace, Experiment\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
|
||||
"\n",
|
||||
"scripts_folder = \"mpi_scripts\"\n",
|
||||
"\n",
|
||||
"if not os.path.isdir(scripts_folder):\n",
|
||||
" os.mkdir(scripts_folder)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
|
||||
"from azureml.core.datastore import Datastore\n",
|
||||
"from azureml.data.data_reference import DataReference\n",
|
||||
"from azureml.pipeline.core import Pipeline, PipelineData\n",
|
||||
"from azureml.pipeline.steps import PythonScriptStep, MpiStep\n",
|
||||
"from azureml.core.runconfig import CondaDependencies, RunConfiguration\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Create or use existing compute"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# AmlCompute\n",
|
||||
"cpu_cluster_name = \"cpu-cluster\"\n",
|
||||
"try:\n",
|
||||
" cpu_cluster = AmlCompute(ws, cpu_cluster_name)\n",
|
||||
" print(\"found existing cluster.\")\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print(\"creating new cluster\")\n",
|
||||
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_v2\",\n",
|
||||
" max_nodes = 1)\n",
|
||||
"\n",
|
||||
" # create the cluster\n",
|
||||
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, provisioning_config)\n",
|
||||
" cpu_cluster.wait_for_completion(show_output=True)\n",
|
||||
" \n",
|
||||
"# AmlCompute\n",
|
||||
"gpu_cluster_name = \"gpu-cluster\"\n",
|
||||
"try:\n",
|
||||
" gpu_cluster = AmlCompute(ws, gpu_cluster_name)\n",
|
||||
" print(\"found existing cluster.\")\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print(\"creating new cluster\")\n",
|
||||
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_NC6\",\n",
|
||||
" max_nodes = 3)\n",
|
||||
"\n",
|
||||
" # create the cluster\n",
|
||||
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)\n",
|
||||
" gpu_cluster.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Python Scripts\n",
|
||||
"We use an edited version of `neural_style_mpi.py` (original is [here](https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py)). Scripts to split and stitch the video are thin wrappers to calls to `ffmpeg`. These scripts are also located in the \"scripts_folder\".\n",
|
||||
"\n",
|
||||
"We install `ffmpeg` through conda dependencies."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile $scripts_folder/process_video.py\n",
|
||||
"import argparse\n",
|
||||
"import glob\n",
|
||||
"import os\n",
|
||||
"import subprocess\n",
|
||||
"\n",
|
||||
"parser = argparse.ArgumentParser(description=\"Process input video\")\n",
|
||||
"parser.add_argument('--input_video', required=True)\n",
|
||||
"parser.add_argument('--output_audio', required=True)\n",
|
||||
"parser.add_argument('--output_images', required=True)\n",
|
||||
"\n",
|
||||
"args = parser.parse_args()\n",
|
||||
"\n",
|
||||
"os.makedirs(args.output_audio, exist_ok=True)\n",
|
||||
"os.makedirs(args.output_images, exist_ok=True)\n",
|
||||
"\n",
|
||||
"subprocess.run(\"ffmpeg -i {} {}/video.aac\"\n",
|
||||
" .format(args.input_video, args.output_audio),\n",
|
||||
" shell=True, check=True\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
"subprocess.run(\"ffmpeg -i {} {}/%05d_video.jpg -hide_banner\"\n",
|
||||
" .format(args.input_video, args.output_images),\n",
|
||||
" shell=True, check=True\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile $scripts_folder/stitch_video.py\n",
|
||||
"import argparse\n",
|
||||
"import os\n",
|
||||
"import subprocess\n",
|
||||
"\n",
|
||||
"parser = argparse.ArgumentParser(description=\"Process input video\")\n",
|
||||
"parser.add_argument('--images_dir', required=True)\n",
|
||||
"parser.add_argument('--input_audio', required=True)\n",
|
||||
"parser.add_argument('--output_dir', required=True)\n",
|
||||
"\n",
|
||||
"args = parser.parse_args()\n",
|
||||
"\n",
|
||||
"os.makedirs(args.output_dir, exist_ok=True)\n",
|
||||
"\n",
|
||||
"subprocess.run(\"ffmpeg -framerate 30 -i {}/%05d_video.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p \"\n",
|
||||
" \"-y {}/video_without_audio.mp4\"\n",
|
||||
" .format(args.images_dir, args.output_dir),\n",
|
||||
" shell=True, check=True\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
"subprocess.run(\"ffmpeg -i {}/video_without_audio.mp4 -i {}/video.aac -map 0:0 -map 1:0 -vcodec \"\n",
|
||||
" \"copy -acodec copy -y {}/video_with_audio.mp4\"\n",
|
||||
" .format(args.output_dir, args.input_audio, args.output_dir),\n",
|
||||
" shell=True, check=True\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The sample video **organutan.mp4** is stored at a publicly shared datastore. We are registering the datastore below. If you want to take a look at the original video, click here. (https://pipelinedata.blob.core.windows.net/sample-videos/orangutan.mp4)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# datastore for input video\n",
|
||||
"account_name = \"pipelinedata\"\n",
|
||||
"video_ds = Datastore.register_azure_blob_container(ws, \"videos\", \"sample-videos\",\n",
|
||||
" account_name=account_name, overwrite=True)\n",
|
||||
"\n",
|
||||
"# datastore for models\n",
|
||||
"models_ds = Datastore.register_azure_blob_container(ws, \"models\", \"styletransfer\", \n",
|
||||
" account_name=\"pipelinedata\", \n",
|
||||
" overwrite=True)\n",
|
||||
" \n",
|
||||
"# downloaded models from https://pytorch.org/tutorials/advanced/neural_style_tutorial.html are kept here\n",
|
||||
"models_dir = DataReference(data_reference_name=\"models\", datastore=models_ds, \n",
|
||||
" path_on_datastore=\"saved_models\", mode=\"download\")\n",
|
||||
"\n",
|
||||
"# the default blob store attached to a workspace\n",
|
||||
"default_datastore = ws.get_default_datastore()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Sample video"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"video_name=os.getenv(\"STYLE_TRANSFER_VIDEO_NAME\", \"orangutan.mp4\") \n",
|
||||
"orangutan_video = DataReference(datastore=video_ds,\n",
|
||||
" data_reference_name=\"video\",\n",
|
||||
" path_on_datastore=video_name, mode=\"download\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"cd = CondaDependencies()\n",
|
||||
"\n",
|
||||
"cd.add_channel(\"conda-forge\")\n",
|
||||
"cd.add_conda_package(\"ffmpeg\")\n",
|
||||
"\n",
|
||||
"cd.add_channel(\"pytorch\")\n",
|
||||
"cd.add_conda_package(\"pytorch\")\n",
|
||||
"cd.add_conda_package(\"torchvision\")\n",
|
||||
"\n",
|
||||
"# Runconfig\n",
|
||||
"amlcompute_run_config = RunConfiguration(conda_dependencies=cd)\n",
|
||||
"amlcompute_run_config.environment.docker.enabled = True\n",
|
||||
"amlcompute_run_config.environment.docker.base_image = \"pytorch/pytorch\"\n",
|
||||
"amlcompute_run_config.environment.spark.precache_packages = False"
|
||||
]
|
||||
},
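{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can print the serialized conda environment specification to confirm that `ffmpeg`, `pytorch` and `torchvision` made it into the run configuration. This cell is purely an illustrative sanity check."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect the conda environment that the pipeline steps will run with (sanity check only)\n",
"print(cd.serialize_to_string())"
]
},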
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ffmpeg_audio = PipelineData(name=\"ffmpeg_audio\", datastore=default_datastore)\n",
|
||||
"ffmpeg_images = PipelineData(name=\"ffmpeg_images\", datastore=default_datastore)\n",
|
||||
"processed_images = PipelineData(name=\"processed_images\", datastore=default_datastore)\n",
|
||||
"output_video = PipelineData(name=\"output_video\", datastore=default_datastore)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Define tweakable parameters to pipeline\n",
|
||||
"These parameters can be changed when the pipeline is published and rerun from a REST call"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core.graph import PipelineParameter\n",
|
||||
"# create a parameter for style (one of \"candy\", \"mosaic\", \"rain_princess\", \"udnie\") to transfer the images to\n",
|
||||
"style_param = PipelineParameter(name=\"style\", default_value=\"mosaic\")\n",
|
||||
"# create a parameter for the number of nodes to use in step no. 2 (style transfer)\n",
|
||||
"nodecount_param = PipelineParameter(name=\"nodecount\", default_value=1)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"split_video_step = PythonScriptStep(\n",
|
||||
" name=\"split video\",\n",
|
||||
" script_name=\"process_video.py\",\n",
|
||||
" arguments=[\"--input_video\", orangutan_video,\n",
|
||||
" \"--output_audio\", ffmpeg_audio,\n",
|
||||
" \"--output_images\", ffmpeg_images,\n",
|
||||
" ],\n",
|
||||
" compute_target=cpu_cluster,\n",
|
||||
" inputs=[orangutan_video],\n",
|
||||
" outputs=[ffmpeg_images, ffmpeg_audio],\n",
|
||||
" runconfig=amlcompute_run_config,\n",
|
||||
" source_directory=scripts_folder\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# create a MPI step for distributing style transfer step across multiple nodes in AmlCompute \n",
|
||||
"# using 'nodecount_param' PipelineParameter\n",
|
||||
"distributed_style_transfer_step = MpiStep(\n",
|
||||
" name=\"mpi style transfer\",\n",
|
||||
" script_name=\"neural_style_mpi.py\",\n",
|
||||
" arguments=[\"--content-dir\", ffmpeg_images,\n",
|
||||
" \"--output-dir\", processed_images,\n",
|
||||
" \"--model-dir\", models_dir,\n",
|
||||
" \"--style\", style_param,\n",
|
||||
" \"--cuda\", 1\n",
|
||||
" ],\n",
|
||||
" compute_target=gpu_cluster,\n",
|
||||
" node_count=nodecount_param, \n",
|
||||
" process_count_per_node=1,\n",
|
||||
" inputs=[models_dir, ffmpeg_images],\n",
|
||||
" outputs=[processed_images],\n",
|
||||
" pip_packages=[\"mpi4py\", \"torch\", \"torchvision\"],\n",
|
||||
" use_gpu=True,\n",
|
||||
" source_directory=scripts_folder\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"stitch_video_step = PythonScriptStep(\n",
|
||||
" name=\"stitch\",\n",
|
||||
" script_name=\"stitch_video.py\",\n",
|
||||
" arguments=[\"--images_dir\", processed_images, \n",
|
||||
" \"--input_audio\", ffmpeg_audio, \n",
|
||||
" \"--output_dir\", output_video],\n",
|
||||
" compute_target=cpu_cluster,\n",
|
||||
" inputs=[processed_images, ffmpeg_audio],\n",
|
||||
" outputs=[output_video],\n",
|
||||
" runconfig=amlcompute_run_config,\n",
|
||||
" source_directory=scripts_folder\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Run the pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline = Pipeline(workspace=ws, steps=[stitch_video_step])\n",
|
||||
"# submit the pipeline and provide values for the PipelineParameters used in the pipeline\n",
|
||||
"pipeline_run = Experiment(ws, 'style_transfer').submit(pipeline, pipeline_parameters={\"style\": \"mosaic\", \"nodecount\": 3})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Monitor using widget"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"RunDetails(pipeline_run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Downloads the video in `output_video` folder"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Download output video"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def download_video(run, target_dir=None):\n",
|
||||
" stitch_run = run.find_step_run(\"stitch\")[0]\n",
|
||||
" port_data = stitch_run.get_output_data(\"output_video\")\n",
|
||||
" port_data.download(target_dir, show_progress=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline_run.wait_for_completion()\n",
|
||||
"download_video(pipeline_run, \"output_video_mosaic\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Publish pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"published_pipeline = pipeline_run.publish_pipeline(\n",
|
||||
" name=\"batch score style transfer\", description=\"style transfer\", version=\"1.0\")\n",
|
||||
"\n",
|
||||
"published_pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Get published pipeline\n",
|
||||
"\n",
|
||||
"You can get the published pipeline using **pipeline id**.\n",
|
||||
"\n",
|
||||
"To get all the published pipelines for a given workspace(ws): \n",
|
||||
"```css\n",
|
||||
"all_pub_pipelines = PublishedPipeline.get_all(ws)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core import PublishedPipeline\n",
|
||||
"\n",
|
||||
"pipeline_id = published_pipeline.id # use your published pipeline id\n",
|
||||
"published_pipeline = PublishedPipeline.get(ws, pipeline_id)\n",
|
||||
"\n",
|
||||
"published_pipeline"
|
||||
]
|
||||
},
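{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can also list every pipeline published in the workspace and confirm that the newly published pipeline appears. This cell is illustrative and can be skipped."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# List all published pipelines in the workspace (illustrative check)\n",
"for p in PublishedPipeline.get_all(ws):\n",
"    print(p.name, p.id)"
]
},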
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Re-run pipeline through REST calls for other styles"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Get AAD token\n",
|
||||
"[This notebook](https://aka.ms/pl-restep-auth) shows how to authenticate to AML workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.authentication import InteractiveLoginAuthentication\n",
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"auth = InteractiveLoginAuthentication()\n",
|
||||
"aad_token = auth.get_authentication_header()\n"
|
||||
]
|
||||
},
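{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you plan to invoke the REST endpoint from an unattended job where interactive login is not possible, a service principal can be used to obtain the authentication header instead. The cell below is an illustrative sketch and is commented out; the tenant id, client id and secret are placeholders for your own service principal details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: non-interactive authentication with a service principal.\n",
"# The three values below are placeholders - replace them with your own service principal details.\n",
"# from azureml.core.authentication import ServicePrincipalAuthentication\n",
"#\n",
"# sp_auth = ServicePrincipalAuthentication(tenant_id=\"<tenant-id>\",\n",
"#                                          service_principal_id=\"<client-id>\",\n",
"#                                          service_principal_password=\"<client-secret>\")\n",
"# aad_token = sp_auth.get_authentication_header()"
]
},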
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Get endpoint URL"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"rest_endpoint = published_pipeline.endpoint"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Send request and monitor"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Run the pipeline using PipelineParameter values style='candy' and nodecount=2"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"response = requests.post(rest_endpoint, \n",
|
||||
" headers=aad_token,\n",
|
||||
" json={\"ExperimentName\": \"style_transfer\",\n",
|
||||
" \"ParameterAssignments\": {\"style\": \"candy\", \"nodecount\": 2}})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"try:\n",
|
||||
" response.raise_for_status()\n",
|
||||
"except Exception: \n",
|
||||
" raise Exception('Received bad response from the endpoint: {}\\n'\n",
|
||||
" 'Response Code: {}\\n'\n",
|
||||
" 'Headers: {}\\n'\n",
|
||||
" 'Content: {}'.format(rest_endpoint, response.status_code, response.headers, response.content))\n",
|
||||
"\n",
|
||||
"run_id = response.json().get('Id')\n",
|
||||
"print('Submitted pipeline run: ', run_id)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core.run import PipelineRun\n",
|
||||
"published_pipeline_run_candy = PipelineRun(ws.experiments[\"style_transfer\"], run_id)\n",
|
||||
"RunDetails(published_pipeline_run_candy).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Run the pipeline using PipelineParameter values style='rain_princess' and nodecount=3"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"response = requests.post(rest_endpoint, \n",
|
||||
" headers=aad_token,\n",
|
||||
" json={\"ExperimentName\": \"style_transfer\",\n",
|
||||
" \"ParameterAssignments\": {\"style\": \"rain_princess\", \"nodecount\": 3}})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"try:\n",
|
||||
" response.raise_for_status()\n",
|
||||
"except Exception: \n",
|
||||
" raise Exception('Received bad response from the endpoint: {}\\n'\n",
|
||||
" 'Response Code: {}\\n'\n",
|
||||
" 'Headers: {}\\n'\n",
|
||||
" 'Content: {}'.format(rest_endpoint, response.status_code, response.headers, response.content))\n",
|
||||
"\n",
|
||||
"run_id = response.json().get('Id')\n",
|
||||
"print('Submitted pipeline run: ', run_id)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"published_pipeline_run_rain = PipelineRun(ws.experiments[\"style_transfer\"], run_id)\n",
|
||||
"RunDetails(published_pipeline_run_rain).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Run the pipeline using PipelineParameter values style='udnie' and nodecount=4"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"response = requests.post(rest_endpoint, \n",
|
||||
" headers=aad_token,\n",
|
||||
" json={\"ExperimentName\": \"style_transfer\",\n",
|
||||
" \"ParameterAssignments\": {\"style\": \"udnie\", \"nodecount\": 3}})\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"try:\n",
|
||||
" response.raise_for_status()\n",
|
||||
"except Exception: \n",
|
||||
" raise Exception('Received bad response from the endpoint: {}\\n'\n",
|
||||
" 'Response Code: {}\\n'\n",
|
||||
" 'Headers: {}\\n'\n",
|
||||
" 'Content: {}'.format(rest_endpoint, response.status_code, response.headers, response.content))\n",
|
||||
"\n",
|
||||
"run_id = response.json().get('Id')\n",
|
||||
"print('Submitted pipeline run: ', run_id)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"published_pipeline_run_udnie = PipelineRun(ws.experiments[\"style_transfer\"], run_id)\n",
|
||||
"RunDetails(published_pipeline_run_udnie).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Download output from re-run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"published_pipeline_run_candy.wait_for_completion()\n",
|
||||
"published_pipeline_run_rain.wait_for_completion()\n",
|
||||
"published_pipeline_run_udnie.wait_for_completion()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"download_video(published_pipeline_run_candy, target_dir=\"output_video_candy\")\n",
|
||||
"download_video(published_pipeline_run_rain, target_dir=\"output_video_rain_princess\")\n",
|
||||
"download_video(published_pipeline_run_udnie, target_dir=\"output_video_udnie\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "balapv mabables"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.7"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,4 +1,4 @@
|
||||
name: pipeline-style-transfer
|
||||
name: pipeline-style-transfer-mpi
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
@@ -13,7 +13,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -0,0 +1,7 @@
|
||||
name: pipeline-style-transfer-parallel-run
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azureml-pipeline-steps
|
||||
- azureml-widgets
|
||||
- requests
|
||||
@@ -456,6 +456,24 @@
|
||||
"monitor.enable_schedule()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Delete the DataDriftDetector\n",
|
||||
"\n",
|
||||
"Invoking the `delete()` method on the object deletes the the drift monitor permanently and cannot be undone. You will no longer be able to find it in the UI and the `list()` or `get()` methods. The object on which delete() was called will have its state set to deleted and name suffixed with deleted. The baseline and target datasets and model data that was collected, if any, are not deleted. The compute is not deleted. The DataDrift schedule pipeline is disabled and archived."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"monitor.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -35,6 +35,7 @@ Using these samples, you will learn how to do the following.
|
||||
| [cartpole_sc.ipynb](cartpole-on-single-compute/cartpole_sc.ipynb) | Notebook to train a Cartpole playing agent on an Azure Machine Learning Compute Cluster (single node) |
|
||||
| [pong_rllib.ipynb](atari-on-distributed-compute/pong_rllib.ipynb) | Notebook for distributed training of Pong agent using RLlib on multiple compute targets |
|
||||
| [minecraft.ipynb](minecraft-on-distributed-compute/minecraft.ipynb) | Notebook to train an agent to navigate through a lava maze in the Minecraft game |
|
||||
| [particle.ipynb](multiagent-particle-envs/particle.ipynb) | Notebook to train policies in a multiagent cooperative navigation scenario based on OpenAI's Particle environments |
|
||||
|
||||
## Prerequisites
|
||||
|
||||
|
||||
@@ -22,7 +22,7 @@
|
||||
"source": [
|
||||
"# Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Compute Instance\n",
|
||||
"\n",
|
||||
"Reinforcement Learning in Azure Machine Learning is a managed service for running reinforcement learning training and simulation. With Reinforcement Learning in Azure Machine Learning, data scientists can start developing reinforcement learning systems on one machine, and scale to compute targets with 100\u00e2\u20ac\u2122s of nodes if needed.\n",
|
||||
"Reinforcement Learning in Azure Machine Learning is a managed service for running reinforcement learning training and simulation. With Reinforcement Learning in Azure Machine Learning, data scientists can start developing reinforcement learning systems on one machine, and scale to compute targets with 100s of nodes if needed.\n",
|
||||
"\n",
|
||||
"This example shows how to use Reinforcement Learning in Azure Machine Learning to train a Cartpole playing agent on a compute instance."
|
||||
]
|
||||
@@ -86,7 +86,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"print(\"Azure Machine Learning SDK Version: \", azureml.core.VERSION)"
|
||||
"print(\"Azure Machine Learning SDK Version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -128,24 +128,12 @@
|
||||
"source": [
|
||||
"import os.path\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Get information about the currently running compute instance (notebook VM), like its name and prefix.\n",
|
||||
"def load_nbvm():\n",
|
||||
" if not os.path.isfile(\"/mnt/azmnt/.nbvm\"):\n",
|
||||
" return None\n",
|
||||
" with open(\"/mnt/azmnt/.nbvm\", 'r') as file:\n",
|
||||
" return {key:value for (key, value) in [line.strip().split('=') for line in file]}\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Get information about the capabilities of an azureml.core.compute.AmlCompute target\n",
|
||||
"# In particular how much RAM + GPU + HDD it has.\n",
|
||||
"def get_compute_size(self, workspace):\n",
|
||||
" for size in self.supported_vmsizes(workspace):\n",
|
||||
" if(size['name'].upper() == self.vm_size):\n",
|
||||
" return size\n",
|
||||
"\n",
|
||||
"azureml.core.compute.ComputeTarget.size = get_compute_size\n",
|
||||
"del(get_compute_size)"
|
||||
" with open(\"/mnt/azmnt/.nbvm\", 'r') as nbvm_file:\n",
|
||||
" return {key:value for (key, value) in line.strip().split('=') for line in nbvm_file}\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -161,7 +149,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import ComputeTarget, ComputeInstance\n",
|
||||
"from azureml.core.compute import ComputeInstance\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"\n",
|
||||
"# Load current compute instance info\n",
|
||||
@@ -188,9 +176,7 @@
|
||||
"compute_target = ws.compute_targets[instance_name]\n",
|
||||
"\n",
|
||||
"print(\"Compute target status:\")\n",
|
||||
"print(compute_target.get_status().serialize())\n",
|
||||
"print(\"Compute target size:\")\n",
|
||||
"print(compute_target.size(ws))"
|
||||
"print(compute_target.get_status().serialize())\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -525,7 +511,6 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Find checkpoints and last checkpoint number\n",
|
||||
"from os import path\n",
|
||||
"checkpoint_files = [\n",
|
||||
" os.path.basename(file) for file in training_artifacts_ds.to_path() \\\n",
|
||||
" if os.path.basename(file).startswith('checkpoint-') and \\\n",
|
||||
@@ -629,8 +614,6 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"\n",
|
||||
"RunDetails(rollout_run).show()"
|
||||
]
|
||||
},
|
||||
|
||||
@@ -4,4 +4,3 @@ dependencies:
|
||||
- azureml-sdk
|
||||
- azureml-contrib-reinforcementlearning
|
||||
- azureml-widgets
|
||||
- azureml-dataprep
|
||||
|
||||
@@ -22,7 +22,7 @@
|
||||
"source": [
|
||||
"# Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute\n",
|
||||
"\n",
|
||||
"Reinforcement Learning in Azure Machine Learning is a managed service for running reinforcement learning training and simulation. With Reinforcement Learning in Azure Machine Learning, data scientists can start developing reinforcement learning systems on one machine, and scale to compute targets with 100\u00e2\u20ac\u2122s of nodes if needed.\n",
|
||||
"Reinforcement Learning in Azure Machine Learning is a managed service for running reinforcement learning training and simulation. With Reinforcement Learning in Azure Machine Learning, data scientists can start developing reinforcement learning systems on one machine, and scale to compute targets with 100s of nodes if needed.\n",
|
||||
"\n",
|
||||
"This example shows how to use Reinforcement Learning in Azure Machine Learning to train a Cartpole playing agent on a single compute. "
|
||||
]
|
||||
@@ -87,7 +87,7 @@
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"Azure Machine Learning SDK Version: \", azureml.core.VERSION)"
|
||||
"print(\"Azure Machine Learning SDK Version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -248,9 +248,8 @@
|
||||
" # Ray's video capture support requires to run everything under a headless display driver called (xvfb).\n",
|
||||
" # There are two parts to this:\n",
|
||||
" # 1. Use a custom docker file with proper instructions to install xvfb, ffmpeg, python-opengl\n",
|
||||
" # and other dependencies. \n",
|
||||
" # TODO: Add these instructions to default reinforcement learning base image and drop this docker file.\n",
|
||||
" \n",
|
||||
" # and other dependencies.\n",
|
||||
" \n",
|
||||
" with open(\"files/docker/Dockerfile\", \"r\") as f:\n",
|
||||
" dockerfile=f.read()\n",
|
||||
"\n",
|
||||
@@ -546,11 +545,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"from os import path\n",
|
||||
"from distutils import dir_util\n",
|
||||
"import shutil\n",
|
||||
"from files.utils import misc\n",
|
||||
"\n",
|
||||
"# A helper function to download (copy) movies from a dataset to local directory\n",
|
||||
"def download_movies(artifacts_ds, movies, destination):\n",
|
||||
@@ -560,7 +555,7 @@
|
||||
" dir_util.mkpath(destination)\n",
|
||||
" \n",
|
||||
" try:\n",
|
||||
" pirnt(\"Trying mounting dataset and copying movies.\")\n",
|
||||
" print(\"Trying mounting dataset and copying movies.\")\n",
|
||||
" # Note: We assume movie paths start with '\\'\n",
|
||||
" mount_context = artifacts_ds.mount()\n",
|
||||
" mount_context.start()\n",
|
||||
@@ -568,11 +563,11 @@
|
||||
" print('Copying {} ...'.format(movie))\n",
|
||||
" shutil.copy2(path.join(mount_context.mount_point, movie[1:]), destination)\n",
|
||||
" mount_context.stop()\n",
|
||||
" except:\n",
|
||||
" print(\"Mounting failed! Going with dataset download.\")\n",
|
||||
" for i, file in enumerate(artifacts_ds.to_path()):\n",
|
||||
" if file in movies:\n",
|
||||
" print('Downloading {} ...'.format(file))\n",
|
||||
" except OSError as e:\n",
|
||||
" print(\"Mounting failed with error '{0}'. Going with dataset download.\".format(e))\n",
|
||||
" for i, artifact in enumerate(artifacts_ds.to_path()):\n",
|
||||
" if artifact in movies:\n",
|
||||
" print('Downloading {} ...'.format(artifact))\n",
|
||||
" artifacts_ds.skip(i).take(1).download(target_path=destination, overwrite=True)\n",
|
||||
" \n",
|
||||
" print('Downloading movies completed!')\n",
|
||||
@@ -581,14 +576,14 @@
|
||||
"# A helper function to find movies in a directory\n",
|
||||
"def find_movies(movie_path):\n",
|
||||
" print(\"Looking in path:\", movie_path)\n",
|
||||
" mp4_files = []\n",
|
||||
" mp4_movies = []\n",
|
||||
" for root, _, files in os.walk(movie_path):\n",
|
||||
" for file in files:\n",
|
||||
" if file.endswith('.mp4'):\n",
|
||||
" mp4_files.append(path.join(root, file))\n",
|
||||
" print('Found {} movies'.format(len(mp4_files)))\n",
|
||||
" for name in files:\n",
|
||||
" if name.endswith('.mp4'):\n",
|
||||
" mp4_movies.append(path.join(root, name))\n",
|
||||
" print('Found {} movies'.format(len(mp4_movies)))\n",
|
||||
"\n",
|
||||
" return mp4_files\n",
|
||||
" return mp4_movies\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# A helper function to display a movie\n",
|
||||
@@ -718,7 +713,6 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Find checkpoints and last checkpoint number\n",
|
||||
"from os import path\n",
|
||||
"checkpoint_files = [\n",
|
||||
" os.path.basename(file) for file in training_artifacts_ds.to_path() \\\n",
|
||||
" if os.path.basename(file).startswith('checkpoint-') and \\\n",
|
||||
@@ -783,7 +777,6 @@
|
||||
"# 1. Use a custom docker file with proper instructions to install xvfb, ffmpeg, python-opengl\n",
|
||||
"# and other dependencies.\n",
|
||||
"# Note: Even when the rendering is off pyhton-opengl is needed.\n",
|
||||
"# TODO: Add these instructions to default reinforcement learning base image and drop this docker file.\n",
|
||||
"\n",
|
||||
"with open(\"files/docker/Dockerfile\", \"r\") as f:\n",
|
||||
" dockerfile=f.read()\n",
|
||||
@@ -852,8 +845,6 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"\n",
|
||||
"RunDetails(rollout_run).show()"
|
||||
]
|
||||
},
|
||||
@@ -890,8 +881,6 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Dataset\n",
|
||||
"\n",
|
||||
"# Get a handle to child run\n",
|
||||
"child_runs = list(rollout_run.get_children())\n",
|
||||
"print('Number of child runs:', len(child_runs))\n",
|
||||
@@ -971,9 +960,6 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from os import path\n",
|
||||
"from distutils import dir_util\n",
|
||||
"\n",
|
||||
"# To archive the created experiment:\n",
|
||||
"#exp.archive()\n",
|
||||
"\n",
|
||||
|
||||
@@ -4,4 +4,3 @@ dependencies:
|
||||
- azureml-sdk
|
||||
- azureml-contrib-reinforcementlearning
|
||||
- azureml-widgets
|
||||
- azureml-dataprep
|
||||
|
||||
@@ -15,3 +15,9 @@ def on_train_result(info):
|
||||
run.log(
|
||||
name='episodes_total',
|
||||
value=info["result"]["episodes_total"])
|
||||
run.log(
|
||||
name='perf_cpu_percent',
|
||||
value=info["result"]["perf"]["cpu_util_percent"])
|
||||
run.log(
|
||||
name='perf_memory_percent',
|
||||
value=info["result"]["perf"]["ram_util_percent"])
|
||||
|
||||
@@ -1477,7 +1477,7 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
* Pause the old server. It's vital that we do this, otherwise it will
|
||||
* respond to the quit disconnect package straight away and kill the server
|
||||
* thread, which means there will be no server to respond to the loadWorld
|
||||
* code. (This was the cause of the infamous "Holder Lookups" hang.)
|
||||
* code.
|
||||
*/
|
||||
public class PauseOldServerEpisode extends ConfigAwareStateEpisode
|
||||
{
|
||||
@@ -1505,7 +1505,6 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
{
|
||||
if (!killPublicFlag(Minecraft.getMinecraft().getIntegratedServer()))
|
||||
{
|
||||
// Can't pause, don't want to risk the hang - so bail.
|
||||
episodeHasCompletedWithErrors(ClientState.ERROR_CANNOT_CREATE_WORLD, "Can not pause the old server since it's open to LAN; no way to safely create new world.");
|
||||
}
|
||||
}
|
||||
@@ -1555,8 +1554,7 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
if (inAbortState())
|
||||
episodeHasCompleted(ClientState.MISSION_ABORTED);
|
||||
|
||||
// We need to make sure that both the client and server have paused,
|
||||
// otherwise we are still susceptible to the "Holder Lookups" hang.
|
||||
// We need to make sure that both the client and server have paused.
|
||||
|
||||
// Since the server sets its pause state in response to the client's pause state,
|
||||
// and it only performs this check once, at the top of its tick method,
|
||||
@@ -1615,7 +1613,7 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
// If the Minecraft server isn't paused at this point,
|
||||
// then the following line will cause the server thread to exit...
|
||||
Minecraft.getMinecraft().world.sendQuittingDisconnectingPacket();
|
||||
// ...in which case the next line will hang.
|
||||
// ...in which case the next line will block.
|
||||
Minecraft.getMinecraft().loadWorld((WorldClient) null);
|
||||
// Must display the GUI or Minecraft will attempt to access a non-existent player in the client tick.
|
||||
Minecraft.getMinecraft().displayGuiScreen(new GuiMainMenu());
|
||||
@@ -2135,7 +2133,6 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
envServer.observation(data);
|
||||
}
|
||||
} else {
|
||||
// Bung the whole shebang off via TCP:
|
||||
if (this.observationSocket.sendTCPString(data)) {
|
||||
this.failedTCPObservationSendCount = 0;
|
||||
} else {
|
||||
|
||||
@@ -1477,7 +1477,7 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
* Pause the old server. It's vital that we do this, otherwise it will
|
||||
* respond to the quit disconnect package straight away and kill the server
|
||||
* thread, which means there will be no server to respond to the loadWorld
|
||||
* code. (This was the cause of the infamous "Holder Lookups" hang.)
|
||||
* code.
|
||||
*/
|
||||
public class PauseOldServerEpisode extends ConfigAwareStateEpisode
|
||||
{
|
||||
@@ -1505,7 +1505,6 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
{
|
||||
if (!killPublicFlag(Minecraft.getMinecraft().getIntegratedServer()))
|
||||
{
|
||||
// Can't pause, don't want to risk the hang - so bail.
|
||||
episodeHasCompletedWithErrors(ClientState.ERROR_CANNOT_CREATE_WORLD, "Can not pause the old server since it's open to LAN; no way to safely create new world.");
|
||||
}
|
||||
}
|
||||
@@ -1555,8 +1554,7 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
if (inAbortState())
|
||||
episodeHasCompleted(ClientState.MISSION_ABORTED);
|
||||
|
||||
// We need to make sure that both the client and server have paused,
|
||||
// otherwise we are still susceptible to the "Holder Lookups" hang.
|
||||
// We need to make sure that both the client and server have paused.
|
||||
|
||||
// Since the server sets its pause state in response to the client's pause state,
|
||||
// and it only performs this check once, at the top of its tick method,
|
||||
@@ -1615,7 +1613,7 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
// If the Minecraft server isn't paused at this point,
|
||||
// then the following line will cause the server thread to exit...
|
||||
Minecraft.getMinecraft().world.sendQuittingDisconnectingPacket();
|
||||
// ...in which case the next line will hang.
|
||||
// ...in which case the next line will block.
|
||||
Minecraft.getMinecraft().loadWorld((WorldClient) null);
|
||||
// Must display the GUI or Minecraft will attempt to access a non-existent player in the client tick.
|
||||
Minecraft.getMinecraft().displayGuiScreen(new GuiMainMenu());
|
||||
@@ -2135,7 +2133,6 @@ public class ClientStateMachine extends StateMachine implements IMalmoMessageLis
|
||||
envServer.observation(data);
|
||||
}
|
||||
} else {
|
||||
// Bung the whole shebang off via TCP:
|
||||
if (this.observationSocket.sendTCPString(data)) {
|
||||
this.failedTCPObservationSendCount = 0;
|
||||
} else {
|
||||
|
||||
@@ -0,0 +1,60 @@
|
||||
FROM mcr.microsoft.com/azureml/base:openmpi3.1.2-ubuntu18.04
|
||||
|
||||
# Install some basic utilities
|
||||
RUN apt-get update && apt-get install -y \
|
||||
curl \
|
||||
ca-certificates \
|
||||
sudo \
|
||||
cpio \
|
||||
git \
|
||||
bzip2 \
|
||||
libx11-6 \
|
||||
tmux \
|
||||
htop \
|
||||
gcc \
|
||||
xvfb \
|
||||
python-opengl \
|
||||
x11-xserver-utils \
|
||||
ffmpeg \
|
||||
mesa-utils \
|
||||
nano \
|
||||
vim \
|
||||
rsync \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Install python 3.7
|
||||
RUN conda install python==3.7
|
||||
|
||||
# Create a working directory
|
||||
RUN mkdir /app
|
||||
WORKDIR /app
|
||||
|
||||
# Install required pip packages
|
||||
RUN pip install --upgrade pip setuptools && pip install --upgrade \
|
||||
pandas \
|
||||
matplotlib \
|
||||
psutil \
|
||||
numpy \
|
||||
scipy \
|
||||
gym \
|
||||
azureml-defaults \
|
||||
tensorboardX \
|
||||
tensorflow==1.15 \
|
||||
tensorflow-probability==0.8.0 \
|
||||
onnxruntime \
|
||||
tf2onnx \
|
||||
cloudpickle==1.2.0 \
|
||||
tabulate \
|
||||
dm_tree \
|
||||
lz4 \
|
||||
opencv-python \
|
||||
ray==0.8.3 \
|
||||
ray[rllib]==0.8.3 \
|
||||
ray[tune]==0.8.3
|
||||
|
||||
# Install particle
|
||||
RUN git clone https://github.com/openai/multiagent-particle-envs.git
|
||||
COPY patch_files/* multiagent-particle-envs/multiagent/
|
||||
RUN cd multiagent-particle-envs && \
|
||||
pip install -e . && \
|
||||
pip install --upgrade pyglet==1.3.2
|
||||
@@ -0,0 +1,70 @@
|
||||
# MIT License
|
||||
|
||||
# Copyright (c) 2018 OpenAI
|
||||
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
# in the Software without restriction, including without limitation the rights
|
||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
# copies of the Software, and to permit persons to whom the Software is
|
||||
# furnished to do so, subject to the following conditions:
|
||||
|
||||
# The above copyright notice and this permission notice shall be included in all
|
||||
# copies or substantial portions of the Software.
|
||||
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
# SOFTWARE.
|
||||
|
||||
import numpy as np
|
||||
import gym
|
||||
|
||||
|
||||
class MultiDiscrete(gym.Space):
|
||||
"""
|
||||
- The multi-discrete action space consists of a series of discrete action spaces with different
|
||||
parameters
|
||||
- It can be adapted to both a Discrete action space or a continuous (Box) action space
|
||||
- It is useful to represent game controllers or keyboards where each key can be represented as
|
||||
a discrete action space
|
||||
- It is parametrized by passing an array of arrays containing [min, max] for each discrete action
|
||||
space where the discrete action space can take any integers from `min` to `max` (both inclusive)
|
||||
Note: A value of 0 always needs to represent the NOOP action.
|
||||
e.g. Nintendo Game Controller
|
||||
- Can be conceptualized as 3 discrete action spaces:
|
||||
1) Arrow Keys: Discrete 5 - NOOP[0], UP[1], RIGHT[2], DOWN[3], LEFT[4] - params: min: 0, max: 4
|
||||
2) Button A: Discrete 2 - NOOP[0], Pressed[1] - params: min: 0, max: 1
|
||||
3) Button B: Discrete 2 - NOOP[0], Pressed[1] - params: min: 0, max: 1
|
||||
- Can be initialized as
|
||||
MultiDiscrete([ [0,4], [0,1], [0,1] ])
|
||||
"""
|
||||
def __init__(self, array_of_param_array):
|
||||
self.low = np.array([x[0] for x in array_of_param_array])
|
||||
self.high = np.array([x[1] for x in array_of_param_array])
|
||||
self.num_discrete_space = self.low.shape[0]
|
||||
|
||||
def sample(self):
|
||||
""" Returns a array with one sample from each discrete action space """
|
||||
# For each row: round(random .* (max - min) + min, 0)
|
||||
# random_array = prng.np_random.rand(self.num_discrete_space)
|
||||
random_array = np.random.RandomState().rand(self.num_discrete_space)
|
||||
return [int(x) for x in np.floor(np.multiply((self.high - self.low + 1.), random_array) + self.low)]
|
||||
|
||||
def contains(self, x):
|
||||
return len(x) == self.num_discrete_space \
|
||||
and (np.array(x) >= self.low).all() \
|
||||
and (np.array(x) <= self.high).all()
|
||||
|
||||
@property
|
||||
def shape(self):
|
||||
return self.num_discrete_space
|
||||
|
||||
def __repr__(self):
|
||||
return "MultiDiscrete" + str(self.num_discrete_space)
|
||||
|
||||
def __eq__(self, other):
|
||||
return np.array_equal(self.low, other.low) and np.array_equal(self.high, other.high)
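A short illustrative sketch (following the Nintendo-controller example in the docstring above) of how this space behaves:

```python
# Sample one action per discrete sub-space and check it lies within the declared bounds.
space = MultiDiscrete([[0, 4], [0, 1], [0, 1]])
action = space.sample()        # e.g. [3, 0, 1]
assert space.contains(action)
```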
|
||||
@@ -0,0 +1,413 @@
|
||||
# MIT License
|
||||
|
||||
# Copyright (c) 2018 OpenAI
|
||||
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
# in the Software without restriction, including without limitation the rights
|
||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
# copies of the Software, and to permit persons to whom the Software is
|
||||
# furnished to do so, subject to the following conditions:
|
||||
|
||||
# The above copyright notice and this permission notice shall be included in all
|
||||
# copies or substantial portions of the Software.
|
||||
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
# SOFTWARE.
|
||||
|
||||
"""
|
||||
2D rendering framework
|
||||
"""
|
||||
from __future__ import division
|
||||
import os
|
||||
import six
|
||||
import sys
|
||||
from gym import error
|
||||
import math
|
||||
import numpy as np
|
||||
import pyglet
|
||||
|
||||
from pyglet.gl import glEnable, glHint, glLineWidth, glBlendFunc, glClearColor, glPushMatrix, \
|
||||
glTranslatef, glRotatef, glScalef, glPopMatrix, glColor4f, glBegin, glVertex3f, glEnd, glLineStipple, \
|
||||
glDisable, glVertex2f, GL_BLEND, GL_LINE_SMOOTH, GL_LINE_SMOOTH_HINT, GL_NICEST, GL_SRC_ALPHA, \
|
||||
GL_ONE_MINUS_SRC_ALPHA, GL_LINE_STIPPLE, GL_POINTS, GL_QUADS, GL_TRIANGLES, GL_POLYGON, GL_LINE_LOOP, \
|
||||
GL_LINE_STRIP, GL_LINES
|
||||
|
||||
|
||||
if "Apple" in sys.version:
|
||||
if 'DYLD_FALLBACK_LIBRARY_PATH' in os.environ:
|
||||
os.environ['DYLD_FALLBACK_LIBRARY_PATH'] += ':/usr/lib'
|
||||
# (JDS 2016/04/15): avoid bug on Anaconda 2.3.0 / Yosemite
|
||||
|
||||
|
||||
RAD2DEG = 57.29577951308232
|
||||
|
||||
|
||||
def get_display(spec):
|
||||
"""Convert a display specification (such as :0) into an actual Display
|
||||
object.
|
||||
|
||||
Pyglet only supports multiple Displays on Linux.
|
||||
"""
|
||||
if spec is None:
|
||||
return None
|
||||
elif isinstance(spec, six.string_types):
|
||||
return pyglet.canvas.Display(spec)
|
||||
else:
|
||||
raise error.Error('Invalid display specification: {}. (Must be a string like :0 or None.)'.format(spec))
|
||||
|
||||
|
||||
class Viewer(object):
|
||||
def __init__(self, width, height, display=None):
|
||||
display = get_display(display)
|
||||
|
||||
self.width = width
|
||||
self.height = height
|
||||
|
||||
self.window = pyglet.window.Window(width=width, height=height, display=display)
|
||||
self.window.on_close = self.window_closed_by_user
|
||||
self.geoms = []
|
||||
self.onetime_geoms = []
|
||||
self.transform = Transform()
|
||||
|
||||
glEnable(GL_BLEND)
|
||||
# glEnable(GL_MULTISAMPLE)
|
||||
glEnable(GL_LINE_SMOOTH)
|
||||
# glHint(GL_LINE_SMOOTH_HINT, GL_DONT_CARE)
|
||||
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST)
|
||||
glLineWidth(2.0)
|
||||
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
|
||||
|
||||
def close(self):
|
||||
self.window.close()
|
||||
|
||||
def window_closed_by_user(self):
|
||||
self.close()
|
||||
|
||||
def set_bounds(self, left, right, bottom, top):
|
||||
assert right > left and top > bottom
|
||||
scalex = self.width / (right - left)
|
||||
scaley = self.height / (top - bottom)
|
||||
self.transform = Transform(
|
||||
translation=(-left * scalex, -bottom * scaley),
|
||||
scale=(scalex, scaley))
|
||||
|
||||
def add_geom(self, geom):
|
||||
self.geoms.append(geom)
|
||||
|
||||
def add_onetime(self, geom):
|
||||
self.onetime_geoms.append(geom)
|
||||
|
||||
def render(self, return_rgb_array=False):
|
||||
glClearColor(1, 1, 1, 1)
|
||||
self.window.clear()
|
||||
self.window.switch_to()
|
||||
self.window.dispatch_events()
|
||||
self.transform.enable()
|
||||
for geom in self.geoms:
|
||||
geom.render()
|
||||
for geom in self.onetime_geoms:
|
||||
geom.render()
|
||||
self.transform.disable()
|
||||
arr = None
|
||||
if return_rgb_array:
|
||||
buffer = pyglet.image.get_buffer_manager().get_color_buffer()
|
||||
image_data = buffer.get_image_data()
|
||||
arr = np.fromstring(image_data.data, dtype=np.uint8, sep='')
|
||||
# In https://github.com/openai/gym-http-api/issues/2, we
|
||||
# discovered that someone using Xmonad on Arch was having
|
||||
# a window of size 598 x 398, though a 600 x 400 window
|
||||
# was requested. (Guess Xmonad was preserving a pixel for
|
||||
# the boundary.) So we use the buffer height/width rather
|
||||
# than the requested one.
|
||||
arr = arr.reshape(buffer.height, buffer.width, 4)
|
||||
arr = arr[::-1, :, 0:3]
|
||||
self.window.flip()
|
||||
self.onetime_geoms = []
|
||||
return arr
|
||||
|
||||
# Convenience
|
||||
def draw_circle(self, radius=10, res=30, filled=True, **attrs):
|
||||
geom = make_circle(radius=radius, res=res, filled=filled)
|
||||
_add_attrs(geom, attrs)
|
||||
self.add_onetime(geom)
|
||||
return geom
|
||||
|
||||
def draw_polygon(self, v, filled=True, **attrs):
|
||||
geom = make_polygon(v=v, filled=filled)
|
||||
_add_attrs(geom, attrs)
|
||||
self.add_onetime(geom)
|
||||
return geom
|
||||
|
||||
def draw_polyline(self, v, **attrs):
|
||||
geom = make_polyline(v=v)
|
||||
_add_attrs(geom, attrs)
|
||||
self.add_onetime(geom)
|
||||
return geom
|
||||
|
||||
def draw_line(self, start, end, **attrs):
|
||||
geom = Line(start, end)
|
||||
_add_attrs(geom, attrs)
|
||||
self.add_onetime(geom)
|
||||
return geom
|
||||
|
||||
def get_array(self):
|
||||
self.window.flip()
|
||||
image_data = pyglet.image.get_buffer_manager().get_color_buffer().get_image_data()
|
||||
self.window.flip()
|
||||
arr = np.fromstring(image_data.data, dtype=np.uint8, sep='')
|
||||
arr = arr.reshape(self.height, self.width, 4)
|
||||
return arr[::-1, :, 0:3]
|
||||
|
||||
|
||||
def _add_attrs(geom, attrs):
|
||||
if "color" in attrs:
|
||||
geom.set_color(*attrs["color"])
|
||||
if "linewidth" in attrs:
|
||||
geom.set_linewidth(attrs["linewidth"])
|
||||
|
||||
|
||||
class Geom(object):
|
||||
def __init__(self):
|
||||
self._color = Color((0, 0, 0, 1.0))
|
||||
self.attrs = [self._color]
|
||||
|
||||
def render(self):
|
||||
for attr in reversed(self.attrs):
|
||||
attr.enable()
|
||||
self.render1()
|
||||
for attr in self.attrs:
|
||||
attr.disable()
|
||||
|
||||
def render1(self):
|
||||
raise NotImplementedError
|
||||
|
||||
def add_attr(self, attr):
|
||||
self.attrs.append(attr)
|
||||
|
||||
def set_color(self, r, g, b, alpha=1):
|
||||
self._color.vec4 = (r, g, b, alpha)
|
||||
|
||||
|
||||
class Attr(object):
|
||||
def enable(self):
|
||||
raise NotImplementedError
|
||||
|
||||
def disable(self):
|
||||
pass
|
||||
|
||||
|
||||
class Transform(Attr):
|
||||
def __init__(self, translation=(0.0, 0.0), rotation=0.0, scale=(1, 1)):
|
||||
self.set_translation(*translation)
|
||||
self.set_rotation(rotation)
|
||||
self.set_scale(*scale)
|
||||
|
||||
def enable(self):
|
||||
glPushMatrix()
|
||||
glTranslatef(self.translation[0], self.translation[1], 0) # translate to GL location point
|
||||
glRotatef(RAD2DEG * self.rotation, 0, 0, 1.0)
|
||||
glScalef(self.scale[0], self.scale[1], 1)
|
||||
|
||||
def disable(self):
|
||||
glPopMatrix()
|
||||
|
||||
def set_translation(self, newx, newy):
|
||||
self.translation = (float(newx), float(newy))
|
||||
|
||||
def set_rotation(self, new):
|
||||
self.rotation = float(new)
|
||||
|
||||
def set_scale(self, newx, newy):
|
||||
self.scale = (float(newx), float(newy))
|
||||
|
||||
|
||||
class Color(Attr):
|
||||
def __init__(self, vec4):
|
||||
self.vec4 = vec4
|
||||
|
||||
def enable(self):
|
||||
glColor4f(*self.vec4)
|
||||
|
||||
|
||||
class LineStyle(Attr):
|
||||
def __init__(self, style):
|
||||
self.style = style
|
||||
|
||||
def enable(self):
|
||||
glEnable(GL_LINE_STIPPLE)
|
||||
glLineStipple(1, self.style)
|
||||
|
||||
def disable(self):
|
||||
glDisable(GL_LINE_STIPPLE)
|
||||
|
||||
|
||||
class LineWidth(Attr):
|
||||
def __init__(self, stroke):
|
||||
self.stroke = stroke
|
||||
|
||||
def enable(self):
|
||||
glLineWidth(self.stroke)
|
||||
|
||||
|
||||
class Point(Geom):
|
||||
def __init__(self):
|
||||
Geom.__init__(self)
|
||||
|
||||
def render1(self):
|
||||
glBegin(GL_POINTS) # draw point
|
||||
glVertex3f(0.0, 0.0, 0.0)
|
||||
glEnd()
|
||||
|
||||
|
||||
class FilledPolygon(Geom):
|
||||
def __init__(self, v):
|
||||
Geom.__init__(self)
|
||||
self.v = v
|
||||
|
||||
def render1(self):
|
||||
if len(self.v) == 4:
|
||||
glBegin(GL_QUADS)
|
||||
elif len(self.v) > 4:
|
||||
glBegin(GL_POLYGON)
|
||||
else:
|
||||
glBegin(GL_TRIANGLES)
|
||||
for p in self.v:
|
||||
glVertex3f(p[0], p[1], 0) # draw each vertex
|
||||
glEnd()
|
||||
|
||||
color = (
|
||||
self._color.vec4[0] * 0.5,
|
||||
self._color.vec4[1] * 0.5,
|
||||
self._color.vec4[2] * 0.5,
|
||||
self._color.vec4[3] * 0.5)
|
||||
glColor4f(*color)
|
||||
glBegin(GL_LINE_LOOP)
|
||||
for p in self.v:
|
||||
glVertex3f(p[0], p[1], 0) # draw each vertex
|
||||
glEnd()
|
||||
|
||||
|
||||
def make_circle(radius=10, res=30, filled=True):
|
||||
points = []
|
||||
for i in range(res):
|
||||
ang = 2 * math.pi * i / res
|
||||
points.append((math.cos(ang) * radius, math.sin(ang) * radius))
|
||||
if filled:
|
||||
return FilledPolygon(points)
|
||||
else:
|
||||
return PolyLine(points, True)
|
||||
|
||||
|
||||
def make_polygon(v, filled=True):
|
||||
if filled:
|
||||
return FilledPolygon(v)
|
||||
else:
|
||||
return PolyLine(v, True)
|
||||
|
||||
|
||||
def make_polyline(v):
|
||||
return PolyLine(v, False)
|
||||
|
||||
|
||||
def make_capsule(length, width):
|
||||
l, r, t, b = 0, length, width / 2, -width / 2
|
||||
box = make_polygon([(l, b), (l, t), (r, t), (r, b)])
|
||||
circ0 = make_circle(width / 2)
|
||||
circ1 = make_circle(width / 2)
|
||||
circ1.add_attr(Transform(translation=(length, 0)))
|
||||
geom = Compound([box, circ0, circ1])
|
||||
return geom
|
||||
|
||||
|
||||
class Compound(Geom):
|
||||
def __init__(self, gs):
|
||||
Geom.__init__(self)
|
||||
self.gs = gs
|
||||
for g in self.gs:
|
||||
g.attrs = [a for a in g.attrs if not isinstance(a, Color)]
|
||||
|
||||
def render1(self):
|
||||
for g in self.gs:
|
||||
g.render()
|
||||
|
||||
|
||||
class PolyLine(Geom):
|
||||
def __init__(self, v, close):
|
||||
Geom.__init__(self)
|
||||
self.v = v
|
||||
self.close = close
|
||||
self.linewidth = LineWidth(1)
|
||||
self.add_attr(self.linewidth)
|
||||
|
||||
def render1(self):
|
||||
glBegin(GL_LINE_LOOP if self.close else GL_LINE_STRIP)
|
||||
for p in self.v:
|
||||
glVertex3f(p[0], p[1], 0) # draw each vertex
|
||||
glEnd()
|
||||
|
||||
def set_linewidth(self, x):
|
||||
self.linewidth.stroke = x
|
||||
|
||||
|
||||
class Line(Geom):
|
||||
def __init__(self, start=(0.0, 0.0), end=(0.0, 0.0)):
|
||||
Geom.__init__(self)
|
||||
self.start = start
|
||||
self.end = end
|
||||
self.linewidth = LineWidth(1)
|
||||
self.add_attr(self.linewidth)
|
||||
|
||||
def render1(self):
|
||||
glBegin(GL_LINES)
|
||||
glVertex2f(*self.start)
|
||||
glVertex2f(*self.end)
|
||||
glEnd()
|
||||
|
||||
|
||||
class Image(Geom):
|
||||
def __init__(self, fname, width, height):
|
||||
Geom.__init__(self)
|
||||
self.width = width
|
||||
self.height = height
|
||||
img = pyglet.image.load(fname)
|
||||
self.img = img
|
||||
self.flip = False
|
||||
|
||||
def render1(self):
|
||||
self.img.blit(-self.width / 2, -self.height / 2, width=self.width, height=self.height)
|
||||
|
||||
|
||||
class SimpleImageViewer(object):
|
||||
def __init__(self, display=None):
|
||||
self.window = None
|
||||
self.isopen = False
|
||||
self.display = display
|
||||
|
||||
def imshow(self, arr):
|
||||
if self.window is None:
|
||||
height, width, channels = arr.shape
|
||||
self.window = pyglet.window.Window(width=width, height=height, display=self.display)
|
||||
self.width = width
|
||||
self.height = height
|
||||
self.isopen = True
|
||||
assert arr.shape == (self.height, self.width, 3), "You passed in an image with the wrong shape"
|
||||
image = pyglet.image.ImageData(self.width, self.height, 'RGB', arr.tobytes(), pitch=self.width * -3)
|
||||
self.window.clear()
|
||||
self.window.switch_to()
|
||||
self.window.dispatch_events()
|
||||
image.blit(0, 0)
|
||||
self.window.flip()
|
||||
|
||||
def close(self):
|
||||
if self.isopen:
|
||||
self.window.close()
|
||||
self.isopen = False
|
||||
|
||||
def __del__(self):
|
||||
self.close()
|
||||
@@ -0,0 +1,124 @@
|
||||
import argparse
|
||||
import re
|
||||
import os
|
||||
|
||||
import ray
|
||||
from ray.tune import run_experiments
|
||||
from ray.tune.registry import register_trainable, register_env, get_trainable_cls
|
||||
import ray.rllib.contrib.maddpg.maddpg as maddpg
|
||||
|
||||
from rllib_multiagent_particle_env import env_creator
|
||||
from util import parse_args
|
||||
|
||||
|
||||
def setup_ray():
|
||||
ray.init(address='auto')
|
||||
|
||||
register_env('particle', env_creator)
|
||||
|
||||
|
||||
def gen_policy(args, env, id):
|
||||
use_local_critic = [
|
||||
args.adv_policy == 'ddpg' if id < args.num_adversaries else
|
||||
args.good_policy == 'ddpg' for id in range(env.num_agents)
|
||||
]
|
||||
return (
|
||||
None,
|
||||
env.observation_space_dict[id],
|
||||
env.action_space_dict[id],
|
||||
{
|
||||
'agent_id': id,
|
||||
'use_local_critic': use_local_critic[id],
|
||||
'obs_space_dict': env.observation_space_dict,
|
||||
'act_space_dict': env.action_space_dict,
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
def gen_policies(args, env_config):
|
||||
env = env_creator(env_config)
|
||||
return {'policy_%d' % i: gen_policy(args, env, i) for i in range(len(env.observation_space_dict))}
|
||||
|
||||
|
||||
def to_multiagent_config(policies):
|
||||
policy_ids = list(policies.keys())
|
||||
return {
|
||||
'policies': policies,
|
||||
'policy_mapping_fn': lambda index: policy_ids[index]
|
||||
}
|
||||
|
||||
|
||||
def train(args, env_config):
|
||||
def stop(trial_id, result):
|
||||
max_train_time = int(os.environ.get('AML_MAX_TRAIN_TIME_SECONDS', 2 * 60 * 60))
|
||||
|
||||
return result['episode_reward_mean'] >= args.final_reward \
|
||||
or result['time_total_s'] >= max_train_time
|
||||
|
||||
run_experiments({
|
||||
'MADDPG_RLLib': {
|
||||
'run': 'contrib/MADDPG',
|
||||
'env': 'particle',
|
||||
'stop': stop,
|
||||
# Uncomment to enable more frequent checkpoints:
|
||||
# 'checkpoint_freq': args.checkpoint_freq,
|
||||
'checkpoint_at_end': True,
|
||||
'local_dir': args.local_dir,
|
||||
'restore': args.restore,
|
||||
'config': {
|
||||
# === Log ===
|
||||
'log_level': 'ERROR',
|
||||
|
||||
# === Environment ===
|
||||
'env_config': env_config,
|
||||
'num_envs_per_worker': args.num_envs_per_worker,
|
||||
'horizon': args.max_episode_len,
|
||||
|
||||
# === Policy Config ===
|
||||
# --- Model ---
|
||||
'good_policy': args.good_policy,
|
||||
'adv_policy': args.adv_policy,
|
||||
'actor_hiddens': [args.num_units] * 2,
|
||||
'actor_hidden_activation': 'relu',
|
||||
'critic_hiddens': [args.num_units] * 2,
|
||||
'critic_hidden_activation': 'relu',
|
||||
'n_step': args.n_step,
|
||||
'gamma': args.gamma,
|
||||
|
||||
# --- Exploration ---
|
||||
'tau': 0.01,
|
||||
|
||||
# --- Replay buffer ---
|
||||
'buffer_size': int(1e6),
|
||||
|
||||
# --- Optimization ---
|
||||
'actor_lr': args.lr,
|
||||
'critic_lr': args.lr,
|
||||
'learning_starts': args.train_batch_size * args.max_episode_len,
|
||||
'sample_batch_size': args.sample_batch_size,
|
||||
'train_batch_size': args.train_batch_size,
|
||||
'batch_mode': 'truncate_episodes',
|
||||
|
||||
# --- Parallelism ---
|
||||
'num_workers': args.num_workers,
|
||||
'num_gpus': args.num_gpus,
|
||||
'num_gpus_per_worker': 0,
|
||||
|
||||
# === Multi-agent setting ===
|
||||
'multiagent': to_multiagent_config(gen_policies(args, env_config)),
|
||||
},
|
||||
},
|
||||
}, verbose=1)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
args = parse_args()
|
||||
setup_ray()
|
||||
|
||||
env_config = {
|
||||
'scenario_name': args.scenario,
|
||||
'horizon': args.max_episode_len,
|
||||
'video_frequency': args.checkpoint_freq,
|
||||
}
|
||||
|
||||
train(args, env_config)
|
||||
@@ -0,0 +1,113 @@
|
||||
# Some code taken from: https://github.com/wsjeon/maddpg-rllib/
|
||||
|
||||
import imp
|
||||
import os
|
||||
|
||||
import gym
|
||||
from gym import wrappers
|
||||
from ray import rllib
|
||||
|
||||
from multiagent.environment import MultiAgentEnv
|
||||
import multiagent.scenarios as scenarios
|
||||
|
||||
|
||||
CUSTOM_SCENARIOS = ['simple_switch']
|
||||
|
||||
|
||||
class ParticleEnvRenderWrapper(gym.Wrapper):
|
||||
def __init__(self, env, horizon):
|
||||
super().__init__(env)
|
||||
self.horizon = horizon
|
||||
|
||||
def reset(self):
|
||||
self._num_steps = 0
|
||||
|
||||
return self.env.reset()
|
||||
|
||||
def render(self, mode):
|
||||
if mode == 'human':
|
||||
self.env.render(mode=mode)
|
||||
else:
|
||||
return self.env.render(mode=mode)[0]
|
||||
|
||||
def step(self, actions):
|
||||
obs_list, rew_list, done_list, info_list = self.env.step(actions)
|
||||
|
||||
self._num_steps += 1
|
||||
done = (all(done_list) or self._num_steps >= self.horizon)
|
||||
|
||||
# Gym's Monitor wrapper expects a scalar reward, not a per-agent list. The scalar is only used by its
|
||||
# stats reporter, which we're not interested in. To make video recording
|
||||
# work, we package the rewards in the info object and extract it below.
|
||||
return obs_list, 0, done, [rew_list, done_list, info_list]
|
||||
|
||||
|
||||
class RLlibMultiAgentParticleEnv(rllib.MultiAgentEnv):
|
||||
def __init__(self, scenario_name, horizon, monitor_enabled=False, video_frequency=500):
|
||||
self._env = _make_env(scenario_name, horizon, monitor_enabled, video_frequency)
|
||||
self.num_agents = self._env.n
|
||||
self.agent_ids = list(range(self.num_agents))
|
||||
|
||||
self.observation_space_dict = self._make_dict(self._env.observation_space)
|
||||
self.action_space_dict = self._make_dict(self._env.action_space)
|
||||
|
||||
def reset(self):
|
||||
obs_dict = self._make_dict(self._env.reset())
|
||||
return obs_dict
|
||||
|
||||
def step(self, action_dict):
|
||||
actions = list(action_dict.values())
|
||||
obs_list, _, _, infos = self._env.step(actions)
|
||||
rew_list, done_list, _ = infos
|
||||
|
||||
obs_dict = self._make_dict(obs_list)
|
||||
rew_dict = self._make_dict(rew_list)
|
||||
done_dict = self._make_dict(done_list)
|
||||
done_dict['__all__'] = all(done_list)
|
||||
info_dict = self._make_dict([{'done': done} for done in done_list])
|
||||
|
||||
return obs_dict, rew_dict, done_dict, info_dict
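# Per RLlib's MultiAgentEnv contract, every returned dict is keyed by agent id, and
# done_dict['__all__'] marks the end of the episode for all agents at once.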
|
||||
|
||||
def render(self, mode='human'):
|
||||
self._env.render(mode=mode)
|
||||
|
||||
def _make_dict(self, values):
|
||||
return dict(zip(self.agent_ids, values))
|
||||
|
||||
|
||||
def _video_callable(video_frequency):
|
||||
def should_record_video(episode_id):
|
||||
if episode_id % video_frequency == 0:
|
||||
return True
|
||||
return False
|
||||
|
||||
return should_record_video
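# Passed to gym.wrappers.Monitor as video_callable, so only every
# video_frequency-th episode is recorded.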
|
||||
|
||||
|
||||
def _make_env(scenario_name, horizon, monitor_enabled, video_frequency):
|
||||
if scenario_name in CUSTOM_SCENARIOS:
|
||||
# Scenario file must exist locally
|
||||
file_path = os.path.join(os.path.dirname(__file__), scenario_name + '.py')
|
||||
scenario = imp.load_source('', file_path).Scenario()
|
||||
else:
|
||||
scenario = scenarios.load(scenario_name + '.py').Scenario()
|
||||
|
||||
world = scenario.make_world()
|
||||
|
||||
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation)
|
||||
env.metadata['video.frames_per_second'] = 8
|
||||
|
||||
env = ParticleEnvRenderWrapper(env, horizon)
|
||||
|
||||
if not monitor_enabled:
|
||||
return env
|
||||
|
||||
return wrappers.Monitor(env, './logs/videos', resume=True, video_callable=_video_callable(video_frequency))
|
||||
|
||||
|
||||
def env_creator(config):
|
||||
monitor_enabled = False
|
||||
if hasattr(config, 'worker_index') and hasattr(config, 'vector_index'):
|
||||
monitor_enabled = (config.worker_index == 1 and config.vector_index == 0)
|
||||
|
||||
return RLlibMultiAgentParticleEnv(**config, monitor_enabled=monitor_enabled)
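# Only the first environment on rollout worker 1 gets a Gym Monitor attached, so a single
# video recorder writes to ./logs/videos regardless of how many environments are vectorized.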
|
||||
@@ -0,0 +1,358 @@
|
||||
import numpy as np
|
||||
import random
|
||||
|
||||
from multiagent.core import World, Agent, Landmark
|
||||
from multiagent.scenario import BaseScenario
|
||||
|
||||
|
||||
class SwitchWorld(World):
|
||||
""" Extended World with hills and switches """
|
||||
def __init__(self, hills, switches):
|
||||
super().__init__()
|
||||
# add hills and switches
|
||||
self.hills = hills
|
||||
self.switches = switches
|
||||
self.landmarks.extend(self.hills)
|
||||
self.landmarks.extend(self.switches)
|
||||
|
||||
def step(self):
|
||||
|
||||
super().step()
|
||||
|
||||
# if all hills are activated, reset the switches and hills
|
||||
if all([hill.active for hill in self.hills]):
|
||||
self.reset_hills()
|
||||
self.reset_switches()
|
||||
else:
|
||||
# Update switches
|
||||
for switch in self.switches:
|
||||
switch.step(self)
|
||||
# Update hills
|
||||
for hill in self.hills:
|
||||
hill.step(self)
|
||||
|
||||
def reset_hills(self):
|
||||
possible_hill_positions = [np.array([-0.8, 0]), np.array([0, 0.8]), np.array([0.8, 0]), np.array([0, -0.8])]
|
||||
hill_positions = random.sample(possible_hill_positions, k=len(self.hills))
|
||||
for i, hill in enumerate(self.hills):
|
||||
hill.state.p_pos = hill_positions[i]
|
||||
hill.deactivate()
|
||||
|
||||
def reset_switches(self):
|
||||
possible_switch_positions = [
|
||||
np.array([-0.8, -0.8]),
|
||||
np.array([-0.8, 0.8]),
|
||||
np.array([0.8, -0.8]),
|
||||
np.array([0.8, 0.8])]
|
||||
switch_positions = random.sample(possible_switch_positions, k=len(self.switches))
|
||||
for i, switch in enumerate(self.switches):
|
||||
switch.state.p_pos = switch_positions[i]
|
||||
switch.deactivate()
|
||||
|
||||
|
||||
class Scenario(BaseScenario):
|
||||
def make_world(self):
|
||||
|
||||
# main configurations
|
||||
num_agents = 2
|
||||
num_hills = 2
|
||||
num_switches = 1
|
||||
self.max_episode_length = 100
|
||||
|
||||
# create hills (on edges)
|
||||
possible_hill_positions = [np.array([-0.8, 0]), np.array([0, 0.8]), np.array([0.8, 0]), np.array([0, -0.8])]
|
||||
hill_positions = random.sample(possible_hill_positions, k=num_hills)
|
||||
hills = [Hill(hill_positions[i]) for i in range(num_hills)]
|
||||
# create switches (in corners)
|
||||
possible_switch_positions = [
|
||||
np.array([-0.8, -0.8]),
|
||||
np.array([-0.8, 0.8]),
|
||||
np.array([0.8, -0.8]),
|
||||
np.array([0.8, 0.8])]
|
||||
switch_positions = random.sample(possible_switch_positions, k=num_switches)
|
||||
switches = [Switch(switch_positions[i]) for i in range(num_switches)]
|
||||
|
||||
# make world and set basic properties
|
||||
world = SwitchWorld(hills, switches)
|
||||
world.dim_c = 2
|
||||
world.collaborative = True
|
||||
|
||||
# add agents
|
||||
world.agents = [Agent() for i in range(num_agents)]
|
||||
for i, agent in enumerate(world.agents):
|
||||
agent.name = 'agent %d' % i
|
||||
agent.collide = True
|
||||
agent.silent = True
|
||||
agent.size = 0.1
|
||||
agent.accel = 5.0
|
||||
agent.max_speed = 5.0
|
||||
if i == 0:
|
||||
agent.color = np.array([0.35, 0.35, 0.85])
|
||||
else:
|
||||
agent.color = np.array([0.35, 0.85, 0.85])
|
||||
|
||||
# make initial conditions
|
||||
self.reset_world(world)
|
||||
|
||||
return world
|
||||
|
||||
def reset_world(self, world):
|
||||
# set random initial states
|
||||
for agent in world.agents:
|
||||
agent.state.p_pos = np.array([random.uniform(-1, +1) for _ in range(world.dim_p)])
|
||||
agent.state.p_vel = np.zeros(world.dim_p)
|
||||
agent.state.c = np.zeros(world.dim_c)
|
||||
# set hills randomly
|
||||
world.reset_hills()
|
||||
# set switches randomly
|
||||
world.reset_switches()
|
||||
|
||||
def is_collision(self, agent1, agent2):
|
||||
delta_pos = agent1.state.p_pos - agent2.state.p_pos
|
||||
dist = np.sqrt(np.sum(np.square(delta_pos)))
|
||||
dist_min = agent1.size + agent2.size
|
||||
return True if dist < dist_min else False
|
||||
|
||||
def reward(self, agent, world):
|
||||
# Agents are rewarded based on number of landmarks activated
|
||||
rew = 0
|
||||
if all([h.active for h in world.hills]):
|
||||
rew += 100
|
||||
else:
|
||||
# give bonus each time a hill is activated
|
||||
for hill in world.hills:
|
||||
if hill.activated_just_now:
|
||||
rew += 50
|
||||
# penalise timesteps where nothing is happening
|
||||
if rew == 0:
|
||||
rew -= 0.1
|
||||
# add collision penalty
|
||||
if agent.collide:
|
||||
for a in world.agents:
|
||||
# note: this also counts collision with "itself", so gives -1 at every timestep
|
||||
# would be good to tune the reward function and use (not a == agent) here
|
||||
if self.is_collision(a, agent):
|
||||
rew -= 1
|
||||
return rew
|
||||
|
||||
def observation(self, agent, world):
|
||||
# get positions of all entities in this agent's reference frame
|
||||
entity_pos = []
|
||||
for entity in world.landmarks: # world.entities:
|
||||
entity_pos.append(entity.state.p_pos - agent.state.p_pos)
|
||||
# entity colors
|
||||
entity_color = []
|
||||
for entity in world.landmarks: # world.entities:
|
||||
entity_color.append(entity.color)
|
||||
# communication of all other agents
|
||||
comm = []
|
||||
other_pos = []
|
||||
for other in world.agents:
|
||||
if other is agent:
|
||||
continue
|
||||
comm.append(other.state.c)
|
||||
other_pos.append(other.state.p_pos - agent.state.p_pos)
|
||||
return np.concatenate([agent.state.p_vel] + [agent.state.p_pos] + entity_pos + other_pos + comm)
|
||||
|
||||
|
||||
class Hill(Landmark):
|
||||
"""
|
||||
A hill that can be captured by an agent.
|
||||
To be captured, a team must occupy a hill for a fixed amount of time.
|
||||
"""
|
||||
|
||||
def __init__(self,
|
||||
pos=None,
|
||||
size=0.08,
|
||||
capture_time=2
|
||||
):
|
||||
|
||||
# Initialize Landmark super class
|
||||
super().__init__()
|
||||
self.movable = False
|
||||
self.collide = False
|
||||
self.state.p_pos = pos
|
||||
self.size = size
|
||||
|
||||
# Set static configurations
|
||||
self.capture_time = capture_time
|
||||
|
||||
# Initialize all hills to be inactive
|
||||
self.active = False
|
||||
self.color = np.array([0.5, 0.5, 0.5])
|
||||
self.capture_timer = 0
|
||||
|
||||
self.activated_just_now = False
|
||||
|
||||
def activate(self):
|
||||
self.active = True
|
||||
self.color = np.array([0.1, 0.1, 0.9])
|
||||
|
||||
def deactivate(self):
|
||||
self.active = False
|
||||
self.color = np.array([0.5, 0.5, 0.5])
|
||||
|
||||
def _is_occupied(self, agents):
|
||||
# a hill is occupied if an agent stands on it
|
||||
for agent in agents:
|
||||
dist = np.sqrt(np.sum(np.square(agent.state.p_pos - self.state.p_pos)))
|
||||
if dist < agent.size + self.size:
|
||||
return True
|
||||
return False
|
||||
|
||||
def step(self, world):
|
||||
|
||||
self.activated_just_now = False
|
||||
|
||||
# If hill isn't activated yet, check if an agent activates it
|
||||
# if (not self.active) and (world.switch.is_active()):
|
||||
if (not self.active):
|
||||
|
||||
# Check if an agent is on the hill and all switches are active
|
||||
if (self._is_occupied(world.agents)) and all([switch.active for switch in world.switches]):
|
||||
self.capture_timer += 1
|
||||
|
||||
# activate hill (this is irreversible)
|
||||
if self.capture_timer > self.capture_time:
|
||||
self.activate()
|
||||
self.activated_just_now = True
|
||||
|
||||
# Reset capture timer if hill is not occupied
|
||||
else:
|
||||
self.capture_timer = 0
|
||||
|
||||
|
||||
class Switch(Landmark):
|
||||
"""
|
||||
A switch that can be activated by an agent.
|
||||
The agent has to stay on the switch for it to be active.
|
||||
"""
|
||||
|
||||
def __init__(self,
|
||||
pos=None,
|
||||
size=0.03,
|
||||
):
|
||||
|
||||
# Initialize Landmark super class
|
||||
super().__init__()
|
||||
self.movable = False
|
||||
self.collide = False
|
||||
self.state.p_pos = pos
|
||||
self.size = size
|
||||
|
||||
# Initialize all switches to be inactive
|
||||
self.active = False
|
||||
self.color = np.array([0.8, 0.05, 0.3])
|
||||
self.capture_timer = 0
|
||||
|
||||
def activate(self):
|
||||
self.active = True
|
||||
self.color = np.array([0.1, 0.9, 0.4])
|
||||
|
||||
def deactivate(self):
|
||||
self.active = False
|
||||
self.color = np.array([0.8, 0.05, 0.3])
|
||||
|
||||
def _is_occupied(self, agents):
|
||||
# a switch counts as occupied if an agent stands on it
|
||||
for agent in agents:
|
||||
dist = np.sqrt(np.sum(np.square(agent.state.p_pos - self.state.p_pos)))
|
||||
if dist < agent.size + self.size:
|
||||
return True
|
||||
return False
|
||||
|
||||
def step(self, world):
|
||||
# check if an agent is on the switch and activate/deactivate it accordingly
|
||||
if self._is_occupied(world.agents):
|
||||
self.activate()
|
||||
else:
|
||||
self.deactivate()
|
||||
|
||||
|
||||
class SwitchExpertPolicy():
|
||||
"""
|
||||
Hand-coded expert policy for the simple switch environment.
|
||||
Types of possible experts:
|
||||
- always go to the switch
|
||||
- always go to the hills
|
||||
"""
|
||||
def __init__(self, dim_c, agent, world, expert_type=None, discrete_action_input=True):
|
||||
|
||||
self.dim_c = dim_c
|
||||
self.discrete_action_input = discrete_action_input
|
||||
# the agent we control and world we're in
|
||||
self.agent = agent
|
||||
self.world = world
|
||||
|
||||
if expert_type is None:
|
||||
self.expert_type = random.choice(['switch', 'hill'])
|
||||
else:
|
||||
self.expert_type = expert_type
|
||||
if self.expert_type == 'switch':
|
||||
self.target_switch = self.select_initial_target_switch()
|
||||
elif self.expert_type == 'hill':
|
||||
self.target_hill = self.select_initial_target_hill()
|
||||
else:
|
||||
raise NotImplementedError
|
||||
|
||||
self.step_count = 0
# burn_in and burn_step are referenced in action() but never set elsewhere;
# default them to 0 so those checks stay no-ops.
self.burn_in = 0
self.burn_step = 0
|
||||
|
||||
def select_initial_target_switch(self):
|
||||
return random.choice(self.world.switches)
|
||||
|
||||
def select_initial_target_hill(self):
|
||||
return random.choice(self.world.hills)
|
||||
|
||||
def action(self):
|
||||
|
||||
# select a target!
|
||||
if self.expert_type == 'switch':
|
||||
# if agent is not already on a switch, choose target switch
|
||||
if not any([switch._is_occupied([self.agent]) for switch in self.world.switches]):
|
||||
# select a target switch if there's an inactive one
|
||||
inactive_switches = [switch for switch in self.world.switches if not switch.active]
|
||||
if len(inactive_switches) > 0 and (self.target_switch not in inactive_switches):
|
||||
self.target_switch = random.choice(inactive_switches)
|
||||
target = self.target_switch.state.p_pos
|
||||
elif self.expert_type == 'hill':
|
||||
# pick a new target hill if we don't have one yet, or if the current target hill has already been activated
|
||||
inactive_hills = [hill for hill in self.world.hills if not hill.active]
|
||||
if len(inactive_hills) > 0 and (self.target_hill not in inactive_hills):
|
||||
self.target_hill = random.choice(inactive_hills)
|
||||
target = self.target_hill.state.p_pos
|
||||
|
||||
self.step_count += 1
|
||||
|
||||
impulse = np.clip(target - self.agent.state.p_pos, -self.agent.u_range, self.agent.u_range)
|
||||
|
||||
if self.discrete_action_input:
|
||||
u_idx = np.argmax(np.abs(impulse))
|
||||
if u_idx == 0 and impulse[u_idx] < 0:
|
||||
u = 1
|
||||
elif u_idx == 0 and impulse[u_idx] > 0:
|
||||
u = 2
|
||||
elif u_idx == 1 and impulse[u_idx] < 0:
|
||||
u = 3
|
||||
elif u_idx == 1 and impulse[u_idx] > 0:
|
||||
u = 4
|
||||
else:
|
||||
u = 0
|
||||
else:
|
||||
u = np.zeros(5)
|
||||
if (impulse[0] == impulse[1] == 0) \
|
||||
or (self.step_count < self.burn_in) \
|
||||
or (self.burn_step != 0 and self.step_count % self.burn_step != 0):
|
||||
u[0] = 0.1
|
||||
else:
|
||||
pass
|
||||
# u: noop (?), right, left, down, up
|
||||
if impulse[0] > 0: # x-direction (- left/right + )
|
||||
u[1] = impulse[0] # right
|
||||
elif impulse[0] < 0:
|
||||
u[2] = -impulse[0]
|
||||
if impulse[1] > 0: # y-direction (- up/down + )
|
||||
u[3] = impulse[1]
|
||||
elif impulse[1] < 0:
|
||||
u[4] = -impulse[1]
|
||||
|
||||
return u
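# u is either a discrete action index (0 = noop, 1-4 = axis-aligned moves) or a
# 5-dimensional force vector, depending on discrete_action_input.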
|
||||
@@ -0,0 +1,82 @@
|
||||
import argparse
|
||||
import os
|
||||
import re
|
||||
|
||||
from rllib_multiagent_particle_env import CUSTOM_SCENARIOS
|
||||
|
||||
|
||||
def parse_args():
|
||||
parser = argparse.ArgumentParser('MADDPG with OpenAI MPE')
|
||||
|
||||
# Environment
|
||||
parser.add_argument('--scenario', type=str, default='simple',
|
||||
choices=['simple', 'simple_speaker_listener',
|
||||
'simple_crypto', 'simple_push',
|
||||
'simple_tag', 'simple_spread', 'simple_adversary'
|
||||
] + CUSTOM_SCENARIOS,
|
||||
help='name of the scenario script')
|
||||
parser.add_argument('--max-episode-len', type=int, default=25,
|
||||
help='maximum episode length')
|
||||
parser.add_argument('--num-episodes', type=int, default=60000,
|
||||
help='number of episodes')
|
||||
parser.add_argument('--num-adversaries', type=int, default=0,
|
||||
help='number of adversaries')
|
||||
parser.add_argument('--good-policy', type=str, default='maddpg',
|
||||
help='policy for good agents')
|
||||
parser.add_argument('--adv-policy', type=str, default='maddpg',
|
||||
help='policy of adversaries')
|
||||
|
||||
# Core training parameters
|
||||
parser.add_argument('--lr', type=float, default=1e-2,
|
||||
help='learning rate for Adam optimizer')
|
||||
parser.add_argument('--gamma', type=float, default=0.95,
|
||||
help='discount factor')
|
||||
# NOTE: 1 training iteration = sample_batch_size * num_workers * num_envs_per_worker timesteps
|
||||
parser.add_argument('--sample-batch-size', type=int, default=25,
|
||||
help='number of data points sampled /update /worker')
|
||||
parser.add_argument('--train-batch-size', type=int, default=1024,
|
||||
help='number of data points /update')
|
||||
parser.add_argument('--n-step', type=int, default=1,
|
||||
help='length of multistep value backup')
|
||||
parser.add_argument('--num-units', type=int, default=64,
|
||||
help='number of units in the mlp')
|
||||
parser.add_argument('--final-reward', type=int, default=-400,
|
||||
help='final reward after which to stop training')
|
||||
|
||||
# Checkpoint
|
||||
parser.add_argument('--checkpoint-freq', type=int, default=200,
|
||||
help='save model once every time this many iterations are completed')
|
||||
parser.add_argument('--local-dir', type=str, default='./logs',
|
||||
help='path to save checkpoints')
|
||||
parser.add_argument('--restore', type=str, default=None,
|
||||
help='directory in which training state and model are loaded')
|
||||
|
||||
# Parallelism
|
||||
parser.add_argument('--num-workers', type=int, default=1)
|
||||
parser.add_argument('--num-envs-per-worker', type=int, default=4)
|
||||
parser.add_argument('--num-gpus', type=int, default=0)
|
||||
|
||||
return parser.parse_args()
|
||||
|
||||
|
||||
def find_final_checkpoint(start_dir):
|
||||
def find(pattern, path):
|
||||
result = []
|
||||
for root, _, files in os.walk(path):
|
||||
for name in files:
|
||||
if pattern.match(name):
|
||||
result.append(os.path.join(root, name))
|
||||
return result
|
||||
|
||||
cp_pattern = re.compile('.*checkpoint-\\d+$')
|
||||
checkpoint_files = find(cp_pattern, start_dir)
|
||||
|
||||
checkpoint_numbers = []
|
||||
for file in checkpoint_files:
|
||||
checkpoint_numbers.append(int(file.split('-')[-1]))
|
||||
|
||||
final_checkpoint_number = max(checkpoint_numbers)
|
||||
|
||||
return next(
|
||||
checkpoint_file for checkpoint_file in checkpoint_files
|
||||
if checkpoint_file.endswith(str(final_checkpoint_number)))
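# Illustrative usage (the path shown is hypothetical):
#   find_final_checkpoint('./logs')  ->  './logs/MADDPG_RLLib/<trial>/checkpoint_200/checkpoint-200'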
|
||||
Binary file not shown.
|
After Width: | Height: | Size: 350 KiB |
@@ -0,0 +1,526 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Reinforcement Learning in Azure Machine Learning - Training multiple agents on collaborative ParticleEnv tasks\n",
|
||||
"\n",
|
||||
"This tutorial will show you how to train policies in a multi-agent scenario.\n",
|
||||
"We use OpenAI Gym's [Particle environments](https://github.com/openai/multiagent-particle-envs),\n",
|
||||
"which model agents and landmarks in a two-dimensional world. Particle comes with\n",
|
||||
"several predefined scenarios, both competitive and collaborative, and with or without communication.\n",
|
||||
"\n",
|
||||
"For this tutorial, we pick a cooperative navigation scenario where N agents are in a world with N\n",
|
||||
"landmarks. The agents' goal is to cover all the landmarks without collisions,\n",
|
||||
"so agents must learn to avoid each other (social distancing!). The video below shows training\n",
|
||||
"results for N=3 agents/landmarks:\n",
|
||||
"\n",
|
||||
"<table style=\"width:50%\">\n",
|
||||
" <tr>\n",
|
||||
" <th style=\"text-align: center;\">\n",
|
||||
" <img src=\"./images/particle_simple_spread.gif\" alt=\"Particle video\" align=\"middle\" margin-left=\"auto\" margin-right=\"auto\"/>\n",
|
||||
" </th>\n",
|
||||
" </tr>\n",
|
||||
" <tr style=\"text-align: center;\">\n",
|
||||
" <th>Fig 1. Video of 3 agents covering 3 landmarks in a multiagent Particle scenario.</th>\n",
|
||||
" </tr>\n",
|
||||
"</table>\n",
|
||||
"\n",
|
||||
"The tutorial will cover the following steps:\n",
|
||||
"- Initializing Azure Machine Learning resources for training\n",
|
||||
"- Training policies in a multi-agent environment with Azure Machine Learning service\n",
|
||||
"- Monitoring training progress\n",
|
||||
"\n",
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"The user should have completed the Azure Machine Learning introductory tutorial. You will need to make sure that you have a valid subscription id, a resource group and a workspace. For detailed instructions see [Tutorial: Get started creating your first ML experiment](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup).\n",
|
||||
"\n",
|
||||
"Please ensure that you have a current version of IPython (>= 7.15) installed.\n",
|
||||
"\n",
|
||||
"While this is a standalone notebook, we highly recommend going over the introductory notebooks for RL first.\n",
|
||||
"- Getting started:\n",
|
||||
" - [RL using a compute instance with Azure Machine Learning](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/cartpole-on-compute-instance/cartpole_ci.ipynb)\n",
|
||||
" - [RL using Azure Machine Learning compute](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.ipynb)\n",
|
||||
"- [Scaling RL training runs with Azure Machine Learning](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb)\n",
|
||||
"\n",
|
||||
"Advanced users might also be interested in [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.ipynb) demonstrating how to train a Minecraft RL agent in Azure Machine Learning.\n",
|
||||
"\n",
|
||||
"## Initialize resources\n",
|
||||
"\n",
|
||||
"All required Azure Machine Learning service resources for this tutorial can be set up from Jupyter. This includes:\n",
|
||||
"\n",
|
||||
"- Connecting to your existing Azure Machine Learning workspace.\n",
|
||||
"- Creating an experiment to track runs.\n",
|
||||
"- Creating remote compute targets for [Ray](https://docs.ray.io/en/latest/index.html).\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"### Azure Machine Learning SDK\n",
|
||||
"\n",
|
||||
"Display the Azure Machine Learning SDK version."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"print('Azure Machine Learning SDK Version: ', azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Connect to workspace\n",
|
||||
"\n",
|
||||
"Get a reference to an existing Azure Machine Learning workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.location, ws.resource_group, sep=' | ')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create an experiment\n",
|
||||
"\n",
|
||||
"Create an experiment to track the runs in your workspace. A\n",
|
||||
"workspace can have multiple experiments and each experiment\n",
|
||||
"can be used to track multiple runs (see [documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py)\n",
|
||||
"for details)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Experiment\n",
|
||||
"\n",
|
||||
"exp = Experiment(workspace=ws, name='particle-multiagent')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create or attach an existing compute resource\n",
|
||||
"\n",
|
||||
"A compute target is a designated compute resource where you run your training script. For more information, see [What are compute targets in Azure Machine Learning service?](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target).\n",
|
||||
"\n",
|
||||
"#### CPU target for Ray head\n",
|
||||
"\n",
|
||||
"In the experiment setup for this tutorial, the Ray head node will\n",
|
||||
"run on a CPU node (D3 type). A maximum cluster size of 1 node is\n",
|
||||
"therefore sufficient. If you wish to run multiple experiments in\n",
|
||||
"parallel using the same CPU cluster, you may elect to increase this\n",
|
||||
"number. The cluster will automatically scale down to 0 nodes when\n",
|
||||
"no training jobs are scheduled (see min_nodes).\n",
|
||||
"\n",
|
||||
"The code below creates a compute cluster of D3 type nodes.\n",
|
||||
"If the cluster with the specified name is already in your workspace\n",
|
||||
"the code will skip the creation process.\n",
|
||||
"\n",
|
||||
"**Note: Creation of a compute resource can take several minutes**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
|
||||
"\n",
|
||||
"cpu_cluster_name = 'cpu-cl-d3'\n",
|
||||
"\n",
|
||||
"if cpu_cluster_name in ws.compute_targets:\n",
|
||||
" cpu_cluster = ws.compute_targets[cpu_cluster_name]\n",
|
||||
" if cpu_cluster and type(cpu_cluster) is AmlCompute:\n",
|
||||
" if cpu_cluster.provisioning_state == 'Succeeded':\n",
|
||||
" print('Found existing compute target for {}. Using it.'.format(cpu_cluster_name))\n",
|
||||
" else: \n",
|
||||
" raise Exception('Found existing compute target for {} '.format(cpu_cluster_name)\n",
|
||||
" + 'but it is in state {}'.format(cpu_cluster.provisioning_state))\n",
|
||||
"else:\n",
|
||||
" print('Creating a new compute target for {}...'.format(cpu_cluster_name))\n",
|
||||
" provisioning_config = AmlCompute.provisioning_configuration(\n",
|
||||
" vm_size='STANDARD_D3',\n",
|
||||
" min_nodes=0, \n",
|
||||
" max_nodes=1)\n",
|
||||
"\n",
|
||||
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, provisioning_config)\n",
|
||||
" cpu_cluster.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
|
||||
" \n",
|
||||
" print('Cluster created.')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Training the policies\n",
|
||||
"\n",
|
||||
"### Training environment\n",
|
||||
"\n",
|
||||
"This tutorial uses a custom docker image\n",
|
||||
"with the necessary software installed. The [Environment](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-environments)\n",
|
||||
"class stores the configuration for the training environment. The\n",
|
||||
"docker image is set via `env.docker.base_image`.\n",
|
||||
"`user_managed_dependencies` is set so that\n",
|
||||
"the preinstalled Python packages in the image are preserved.\n",
|
||||
"\n",
|
||||
"Note that since we want to capture videos of the training runs requiring a display, we set the interpreter_path such that the Python process is started via **xvfb-run**."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"from azureml.core import Environment\n",
|
||||
" \n",
|
||||
"cpu_particle_env = Environment(name='particle-cpu')\n",
|
||||
"\n",
|
||||
"cpu_particle_env.docker.enabled = True\n",
|
||||
"cpu_particle_env.docker.base_image = 'akdmsft/particle-cpu'\n",
|
||||
"cpu_particle_env.python.interpreter_path = 'xvfb-run -s \"-screen 0 640x480x16 -ac +extension GLX +render\" python'\n",
|
||||
"\n",
|
||||
"max_train_time = os.environ.get('AML_MAX_TRAIN_TIME_SECONDS', 2 * 60 * 60)\n",
|
||||
"cpu_particle_env.environment_variables['AML_MAX_TRAIN_TIME_SECONDS'] = str(max_train_time)\n",
|
||||
"cpu_particle_env.python.user_managed_dependencies = True"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Training script\n",
|
||||
"\n",
|
||||
"This tutorial uses the multiagent algorithm [Multi-Agent Deep Deterministic Policy Gradient (MADDPG)](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=maddpg#multi-agent-deep-deterministic-policy-gradient-contrib-maddpg).\n",
|
||||
"For training policies in a multiagent scenario, Ray's RLlib also\n",
|
||||
"requires the `multiagent` configuration section to be specified. You\n",
|
||||
"can find more information in the [common parameters](https://docs.ray.io/en/latest/rllib-training.html?highlight=multiagent#common-parameters)\n",
|
||||
"documentation.\n",
|
||||
"\n",
|
||||
"For monitoring and understanding the training progress, one\n",
|
||||
"of the training environments is wrapped in a [Gym monitor](https://github.com/openai/gym/blob/master/gym/wrappers/monitor.py)\n",
|
||||
"which periodically captures videos - by default every 200 training\n",
|
||||
"iterations.\n",
|
||||
"\n",
|
||||
"The stopping criteria are set such that the training run is\n",
|
||||
"terminated after either a mean reward of -400 is observed, or\n",
|
||||
"training has run for over 2 hours.\n",
|
||||
"\n",
|
||||
"### Submitting a training run\n",
|
||||
"\n",
|
||||
"Below, you create the training run using a `ReinforcementLearningEstimator`\n",
|
||||
"object, which contains all the configuration parameters for this experiment:\n",
|
||||
"\n",
|
||||
"- `source_directory`: Contains the training script and helper files to be\n",
|
||||
" copied onto the node.\n",
|
||||
"- `entry_script`: The training script, described in more detail above.\n",
|
||||
"- `script_params`: The command line arguments to pass to the entry script.\n",
|
||||
"- `compute_target`: The compute target for training script execution.\n",
|
||||
"- `environment`: The Azure Machine Learning environment definition for the node running the training.\n",
|
||||
"- `max_run_duration_seconds`: The time after which to abort the run if it is still running.\n",
|
||||
"\n",
|
||||
"For more details, please take a look at the [online documentation](https://docs.microsoft.com/en-us/python/api/azureml-contrib-reinforcementlearning/?view=azure-ml-py)\n",
|
||||
"for Azure Machine Learning service's reinforcement learning offering.\n",
|
||||
"\n",
|
||||
"Note that you can use the same notebook and scripts to experiment with\n",
|
||||
"different Particle environments. You can find a list of supported\n",
|
||||
"environments [here](https://github.com/openai/multiagent-particle-envs/tree/master#list-of-environments).\n",
|
||||
"Simply change the `--scenario` parameter to a supported scenario.\n",
|
||||
"\n",
|
||||
"In order to get the best training results, you can also adjust the\n",
|
||||
"`--final-reward` parameter to determine when to stop training. A greater\n",
|
||||
"reward means longer running time, but improved results. By default,\n",
|
||||
"the final reward will be -400, which should show good progress after\n",
|
||||
"about one hour of run time.\n",
|
||||
"\n",
|
||||
"For this notebook, we use a single D3 nodes, giving us a total of 4 CPUs and\n",
|
||||
"0 GPUs. One CPU is used by the MADDPG trainer, and an additional CPU is\n",
|
||||
"consumed by the RLlib rollout worker. The other 2 CPUs are not used, though\n",
|
||||
"smaller node types will run out of memory for this task.\n",
|
||||
"\n",
|
||||
"Lastly, the RunDetails widget displays information about the submitted RL\n",
|
||||
"experiment, including a link to the Azure portal with more details."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.contrib.train.rl import ReinforcementLearningEstimator\n",
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"\n",
|
||||
"estimator = ReinforcementLearningEstimator(\n",
|
||||
" source_directory='files',\n",
|
||||
" entry_script='particle_train.py',\n",
|
||||
" script_params={\n",
|
||||
" '--scenario': 'simple_spread',\n",
|
||||
" '--final-reward': -400\n",
|
||||
" },\n",
|
||||
" compute_target=cpu_cluster,\n",
|
||||
" environment=cpu_particle_env,\n",
|
||||
" max_run_duration_seconds=3 * 60 * 60\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"train_run = exp.submit(config=estimator)\n",
|
||||
"\n",
|
||||
"RunDetails(train_run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# If you wish to cancel the run before it completes, uncomment and execute:\n",
|
||||
"#train_run.cancel()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Monitoring training progress\n",
|
||||
"\n",
|
||||
"### View the Tensorboard\n",
|
||||
"\n",
|
||||
"The Tensorboard can be displayed via the Azure Machine Learning\n",
|
||||
"service's [Tensorboard API](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-tensorboard).\n",
|
||||
"When running locally, please make sure to follow the instructions\n",
|
||||
"in the link and install required packages. Running this cell will output a URL for the Tensorboard.\n",
|
||||
"\n",
|
||||
"Note that the training script sets the log directory when\n",
|
||||
"starting RLlib via the local_dir parameter. ./logs will automatically\n",
|
||||
"appear in the downloadable files for a run. Since this script is\n",
|
||||
"executed on the Ray head node run, we need to get a reference to it\n",
|
||||
"as shown below.\n",
|
||||
"\n",
|
||||
"The Tensorboard API will continuously stream logs from the run.\n",
|
||||
"\n",
|
||||
"**Note: It may take a couple of minutes after the run is in \"Running\"\n",
|
||||
"state before Tensorboard files are available and the board will refresh automatically**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import time\n",
|
||||
"from azureml.tensorboard import Tensorboard\n",
|
||||
"\n",
|
||||
"head_run = None\n",
|
||||
"\n",
|
||||
"timeout = 60\n",
|
||||
"while timeout > 0 and head_run is None:\n",
|
||||
" timeout -= 1\n",
|
||||
" \n",
|
||||
" try:\n",
|
||||
" head_run = next(r for r in train_run.get_children() if r.id.endswith('head'))\n",
|
||||
" except StopIteration:\n",
|
||||
" time.sleep(1)\n",
|
||||
"\n",
|
||||
"tb = Tensorboard([head_run])\n",
|
||||
"tb.start()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### View training videos\n",
|
||||
"\n",
|
||||
"As mentioned above, we record videos of the agents interacting with the\n",
|
||||
"Particle world. These videos are often a crucial indicator for training\n",
|
||||
"success. The code below downloads the latest video as it becomes available\n",
|
||||
"and displays it in-line.\n",
|
||||
"\n",
|
||||
"Over time, the agents learn to cooperate and avoid collisions while\n",
|
||||
"traveling to all landmarks.\n",
|
||||
"\n",
|
||||
"**Note: It can take several minutes for a video to appear after the run\n",
|
||||
"was started.**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import tempfile\n",
|
||||
"from azureml.core import Dataset\n",
|
||||
"from azureml.data.dataset_error_handling import DatasetValidationError\n",
|
||||
"\n",
|
||||
"from IPython.display import clear_output\n",
|
||||
"from IPython.core.display import display, Video\n",
|
||||
"\n",
|
||||
"datastore = ws.get_default_datastore()\n",
|
||||
"path_prefix = './tmp_videos'\n",
|
||||
"\n",
|
||||
"def download_latest_training_video(run, video_checkpoint_counter):\n",
|
||||
" run_artifacts_path = os.path.join('azureml', run.id)\n",
|
||||
" \n",
|
||||
" try:\n",
|
||||
" run_artifacts_ds = Dataset.File.from_files(datastore.path(os.path.join(run_artifacts_path, '**')))\n",
|
||||
" except DatasetValidationError as e:\n",
|
||||
" # This happens at the start of the run when there is no data available\n",
|
||||
" # in the run's artifacts\n",
|
||||
" return None, video_checkpoint_counter\n",
|
||||
" \n",
|
||||
" video_files = [file for file in run_artifacts_ds.to_path() if file.endswith('.mp4')]\n",
|
||||
" if len(video_files) == video_checkpoint_counter:\n",
|
||||
" return None, video_checkpoint_counter\n",
|
||||
" \n",
|
||||
" iteration_numbers = [int(vf[vf.rindex('video') + len('video') : vf.index('.mp4')]) for vf in video_files]\n",
|
||||
" latest_video = next(vf for vf in video_files if vf.endswith('{num}.mp4'.format(num=max(iteration_numbers))))\n",
|
||||
" latest_video = os.path.join(run_artifacts_path, os.path.normpath(latest_video[1:]))\n",
|
||||
" \n",
|
||||
" datastore.download(\n",
|
||||
" target_path=path_prefix,\n",
|
||||
" prefix=latest_video.replace('\\\\', '/'),\n",
|
||||
" show_progress=False)\n",
|
||||
" \n",
|
||||
" return os.path.join(path_prefix, latest_video), len(video_files)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def render_video(vf):\n",
|
||||
" clear_output(wait=True)\n",
|
||||
" display(Video(data=vf, embed=True, html_attributes='loop autoplay width=50%'))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import shutil\n",
|
||||
"\n",
|
||||
"terminal_statuses = ['Canceled', 'Completed', 'Failed']\n",
|
||||
"video_checkpoint_counter = 0\n",
|
||||
"\n",
|
||||
"while head_run.get_status() not in terminal_statuses:\n",
|
||||
" video_file, video_checkpoint_counter = download_latest_training_video(head_run, video_checkpoint_counter)\n",
|
||||
" if video_file is not None:\n",
|
||||
" render_video(video_file)\n",
|
||||
" \n",
|
||||
" print('Displaying video number {}'.format(video_checkpoint_counter))\n",
|
||||
" shutil.rmtree(path_prefix)\n",
|
||||
" \n",
|
||||
" # Interrupting the kernel can take up to 15 seconds\n",
|
||||
" # depending on when time.sleep started\n",
|
||||
" time.sleep(15)\n",
|
||||
" \n",
|
||||
"train_run.wait_for_completion()\n",
|
||||
"print('The training run has reached a terminal status.')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Cleaning up\n",
|
||||
"\n",
|
||||
"Below, you can find code snippets for your convenience to clean up any resources created as part of this tutorial you don't wish to retain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# to stop the Tensorboard, uncomment and run\n",
|
||||
"#tb.stop()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# to delete the cpu compute target, uncomment and run\n",
|
||||
"#cpu_cluster.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Next steps\n",
|
||||
"\n",
|
||||
"We would love to hear your feedback! Please let us know what you think of Reinforcement Learning in Azure Machine Learning and what features you are looking forward to."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "andress"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
},
|
||||
"notice": "Copyright (c) Microsoft Corporation. All rights reserved.\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00afLicensed under the MIT License.\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00af "
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -0,0 +1,9 @@
name: particle
dependencies:
- pip:
  - azureml-sdk
  - azureml-contrib-reinforcementlearning
  - azureml-widgets
  - tensorboard
  - azureml-tensorboard
  - ipython
@@ -58,7 +58,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Get Azure Machine Learning workspace\n",
|
||||
"Get a reference to an existing Azure Machine Learning workspace. Please make sure to change `STANDARD_NC6` and `STANDARD_D2_V2` to [the ones available in your region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=virtual-machines).\n"
|
||||
"Get a reference to an existing Azure Machine Learning workspace.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -2,4 +2,3 @@ name: devenv_setup
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azure-mgmt-network
|
||||
|
||||
@@ -100,7 +100,7 @@
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
"\n",
|
||||
"print(\"This notebook was created using SDK version 1.7.0, you are currently running version\", azureml.core.VERSION)"
|
||||
"print(\"This notebook was created using SDK version 1.10.0, you are currently running version\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -215,6 +215,24 @@
|
||||
"!more hello.py"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Submitted runs take a snapshot of the *source_directory* to use when executing. You can control which files are available to the run by using an *.amlignore* file."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile .amlignore\n",
|
||||
"# Exclude the outputs directory automatically created by our earlier runs.\n",
|
||||
"/outputs"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -373,7 +391,7 @@
|
||||
"source": [
|
||||
"## Query properties and tags\n",
|
||||
"\n",
|
||||
"You can quary runs within an experiment that match specific properties and tags. "
|
||||
"You can query runs within an experiment that match specific properties and tags."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -579,6 +597,22 @@
|
||||
"name": "roastala"
|
||||
}
|
||||
],
|
||||
"category": "training",
|
||||
"compute": [
|
||||
"Local"
|
||||
],
|
||||
"datasets": [
|
||||
"None"
|
||||
],
|
||||
"deployment": [
|
||||
"None"
|
||||
],
|
||||
"exclude_from_index": false,
|
||||
"framework": [
|
||||
"None"
|
||||
],
|
||||
"friendly_name": "Managing your training runs",
|
||||
"index_order": 2,
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
@@ -596,26 +630,10 @@
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.5"
|
||||
},
|
||||
"friendly_name": "Managing your training runs",
|
||||
"exclude_from_index": false,
|
||||
"index_order": 2,
|
||||
"category": "training",
|
||||
"task": "Monitor and complete runs",
|
||||
"datasets": [
|
||||
"None"
|
||||
],
|
||||
"compute": [
|
||||
"Local"
|
||||
],
|
||||
"deployment": [
|
||||
"None"
|
||||
],
|
||||
"framework": [
|
||||
"None"
|
||||
],
|
||||
"tags": [
|
||||
"None"
|
||||
]
|
||||
],
|
||||
"task": "Monitor and complete runs"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
|
||||
@@ -92,7 +92,7 @@
|
||||
" # Specify the configuration for the new cluster\n",
|
||||
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_D2_V2\",\n",
|
||||
" min_nodes=0,\n",
|
||||
" max_nodes=1)\n",
|
||||
" max_nodes=2)\n",
|
||||
"\n",
|
||||
" # Create the cluster with the specified name and configuration\n",
|
||||
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
|
||||
|
||||
@@ -184,7 +184,7 @@
|
||||
"myenv = Environment(\"myenv\")\n",
|
||||
"\n",
|
||||
"myenv.docker.enabled = True\n",
|
||||
"myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
|
||||
"myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn', 'packaging'])"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -6,9 +6,14 @@ from sklearn.linear_model import Ridge
|
||||
from sklearn.metrics import mean_squared_error
|
||||
from sklearn.model_selection import train_test_split
|
||||
from azureml.core.run import Run
|
||||
from sklearn.externals import joblib
|
||||
import os
|
||||
import numpy as np
|
||||
from sklearn import __version__ as sklearnver
|
||||
from packaging.version import Version
|
||||
if Version(sklearnver) < Version("0.23.0"):
|
||||
from sklearn.externals import joblib
|
||||
else:
|
||||
import joblib
|
||||
|
||||
os.makedirs('./outputs', exist_ok=True)
|
||||
|
||||
|
||||
@@ -0,0 +1,406 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Train using Azure Machine Learning Compute Instance\n",
|
||||
"\n",
|
||||
"* Initialize Workspace\n",
|
||||
"* Introduction to ComputeInstance\n",
|
||||
"* Create an Experiment\n",
|
||||
"* Submit ComputeInstance run\n",
|
||||
"* Additional operations to perform on ComputeInstance"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"If you are using an Azure Machine Learning ComputeInstance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize Workspace\n",
|
||||
"\n",
|
||||
"Initialize a workspace object"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"create workspace"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Introduction to ComputeInstance\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Azure Machine Learning compute instance is a fully-managed cloud-based workstation optimized for your machine learning development environment. It is created **within your workspace region**.\n",
|
||||
"\n",
|
||||
"For more information on ComputeInstance, please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance)\n",
|
||||
"\n",
|
||||
"**Note**: As with other Azure services, there are limits on certain resources (for eg. AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create ComputeInstance\n",
|
||||
"First lets check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since ComputeInstance is created in the region of your workspace, we will use the supported_vms () function to see if the VM family we want to use ('STANDARD_D3_V2') is supported.\n",
|
||||
"\n",
|
||||
"You can also pass a different region to check availability and then re-create your workspace in that region through the [configuration notebook](../../../configuration.ipynb)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"msdoc": "how-to-auto-train-remote.md",
|
||||
"name": "check_region"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import ComputeTarget, ComputeInstance\n",
|
||||
"\n",
|
||||
"ComputeInstance.supported_vmsizes(workspace = ws)\n",
|
||||
"# ComputeInstance.supported_vmsizes(workspace = ws, location='eastus')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"msdoc": "how-to-auto-train-remote.md",
|
||||
"name": "create_instance"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import datetime\n",
|
||||
"import time\n",
|
||||
"\n",
|
||||
"from azureml.core.compute import ComputeTarget, ComputeInstance\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"\n",
|
||||
"# Choose a name for your instance\n",
|
||||
"# Compute instance name should be unique across the azure region\n",
|
||||
"compute_name = \"ci{}\".format(ws._workspace_id)[:10]\n",
|
||||
"\n",
|
||||
"# Verify that instance does not exist already\n",
|
||||
"try:\n",
|
||||
" instance = ComputeInstance(workspace=ws, name=compute_name)\n",
|
||||
" print('Found existing instance, use it.')\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" compute_config = ComputeInstance.provisioning_configuration(\n",
|
||||
" vm_size='STANDARD_D3_V2',\n",
|
||||
" ssh_public_access=False,\n",
|
||||
" # vnet_resourcegroup_name='<my-resource-group>',\n",
|
||||
" # vnet_name='<my-vnet-name>',\n",
|
||||
" # subnet_name='default',\n",
|
||||
" # admin_user_ssh_public_key='<my-sshkey>'\n",
|
||||
" )\n",
|
||||
" instance = ComputeInstance.create(ws, compute_name, compute_config)\n",
|
||||
" instance.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create An Experiment\n",
|
||||
"\n",
|
||||
"**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Experiment\n",
|
||||
"experiment_name = 'train-on-computeinstance'\n",
|
||||
"experiment = Experiment(workspace = ws, name = experiment_name)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Submit ComputeInstance run\n",
|
||||
"The training script `train.py` is already created for you"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create environment\n",
|
||||
"\n",
|
||||
"Create an environment with scikit-learn installed."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Environment\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"myenv = Environment(\"myenv\")\n",
|
||||
"myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Configure & Run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import ScriptRunConfig\n",
|
||||
"from azureml.core.runconfig import DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"src = ScriptRunConfig(source_directory='', script='train.py')\n",
|
||||
"\n",
|
||||
"# Set compute target to the one created in previous step\n",
|
||||
"src.run_config.target = instance\n",
|
||||
"\n",
|
||||
"# Set environment\n",
|
||||
"src.run_config.environment = myenv\n",
|
||||
" \n",
|
||||
"run = experiment.submit(config=src)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"RunDetails(run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can use the get_active_runs() to get the currently running or queued jobs on the compute instance"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# wait for the run to reach Queued or Running state if it is in Preparing state\n",
|
||||
"status = run.get_status()\n",
|
||||
"while status not in ['Queued', 'Running', 'Completed', 'Failed', 'Canceled']:\n",
|
||||
" state = run.get_status()\n",
|
||||
" print('Run status: {}'.format(status))\n",
|
||||
" time.sleep(10)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# get active runs which are in Queued or Running state\n",
|
||||
"active_runs = instance.get_active_runs()\n",
|
||||
"for active_run in active_runs:\n",
|
||||
" print(active_run.run_id, ',', active_run.status)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run.wait_for_completion()\n",
|
||||
"print(run.get_metrics())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Additional operations to perform on ComputeInstance\n",
|
||||
"\n",
|
||||
"You can perform more operations on ComputeInstance such as get status, change the state or deleting the compute."
|
||||
]
|
||||
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"msdoc": "how-to-auto-train-remote.md",
"name": "get_status"
},
"outputs": [],
"source": [
"# get_status() gets the latest status of the ComputeInstance target\n",
"instance.get_status()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"msdoc": "how-to-auto-train-remote.md",
"name": "stop"
},
"outputs": [],
"source": [
"# stop() is used to stop the ComputeInstance\n",
"# Stopping ComputeInstance will stop the billing meter and persist the state on the disk.\n",
"# Available Quota will not be changed with this operation.\n",
"instance.stop(wait_for_completion=True, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"msdoc": "how-to-auto-train-remote.md",
"name": "start"
},
"outputs": [],
"source": [
"# start() is used to start the ComputeInstance if it is in stopped state\n",
"instance.start(wait_for_completion=True, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# restart() is used to restart the ComputeInstance\n",
"instance.restart(wait_for_completion=True, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# delete() is used to delete the ComputeInstance target. Useful if you want to re-use the compute name \n",
"# instance.delete(wait_for_completion=True, show_output=True)"
]
}
],
"metadata": {
"authors": [
{
"name": "ramagott"
}
],
"category": "training",
"compute": [
"Compute Instance"
],
"datasets": [
"Diabetes"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"None"
],
"friendly_name": "Train on Azure Machine Learning Compute Instance",
"index_order": 1,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.7"
},
"tags": [
"None"
],
"task": "Submit a run on Azure Machine Learning Compute Instance."
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -0,0 +1,6 @@
name: train-on-computeinstance
dependencies:
- scikit-learn
- pip:
  - azureml-sdk
  - azureml-widgets
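A minimal sketch of turning this conda specification into an AzureML Environment; the file name used here is an assumption for illustration.

# Sketch only: build an Environment from the conda specification file above.
# The file name 'train-on-computeinstance.yml' is an assumption for illustration.
from azureml.core import Environment

env = Environment.from_conda_specification(
    name='train-on-computeinstance',
    file_path='train-on-computeinstance.yml')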
@@ -0,0 +1,48 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.

from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
import os
import numpy as np
# sklearn.externals.joblib is removed in 0.23
try:
    from sklearn.externals import joblib
except ImportError:
    import joblib

os.makedirs('./outputs', exist_ok=True)

X, y = load_diabetes(return_X_y=True)

run = Run.get_context()

X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.2,
                                                    random_state=0)
data = {"train": {"X": X_train, "y": y_train},
        "test": {"X": X_test, "y": y_test}}

# list of alphas from 0.0 to 0.95 with a 0.05 interval (np.arange excludes the stop value)
alphas = np.arange(0.0, 1.0, 0.05)

for alpha in alphas:
    # Use the Ridge algorithm to create a regression model
    reg = Ridge(alpha=alpha)
    reg.fit(data["train"]["X"], data["train"]["y"])

    preds = reg.predict(data["test"]["X"])
    mse = mean_squared_error(data["test"]["y"], preds)
    run.log('alpha', alpha)
    run.log('mse', mse)

    model_file_name = 'ridge_{0:.2f}.pkl'.format(alpha)
    # save the model in the outputs folder so it automatically gets uploaded
    joblib.dump(value=reg, filename=os.path.join('./outputs/',
                                                 model_file_name))

    print('alpha is {0:.2f}, and mse is {1:0.2f}'.format(alpha, mse))
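A hedged follow-up sketch, not part of the checked-in script: after the run completes, one of the pickled models under ./outputs could be registered from the run; the model name and the chosen alpha file are illustrative assumptions.

# Sketch only: register one of the models saved under ./outputs after the run completes.
# 'diabetes-ridge' and the alpha value 0.40 are illustrative assumptions.
run.wait_for_completion()
model = run.register_model(model_name='diabetes-ridge',
                           model_path='outputs/ridge_0.40.pkl')
print(model.name, model.version)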
@@ -6,10 +6,14 @@ from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
from sklearn.externals import joblib
import os
import numpy as np
import mylib
# sklearn.externals.joblib is removed in 0.23
try:
    from sklearn.externals import joblib
except ImportError:
    import joblib

os.makedirs('./outputs', exist_ok=True)


@@ -8,10 +8,15 @@ from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from azureml.core.run import Run
from sklearn.externals import joblib

import numpy as np

# sklearn.externals.joblib is removed in 0.23
try:
    from sklearn.externals import joblib
except ImportError:
    import joblib

os.makedirs('./outputs', exist_ok=True)
parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str,

@@ -1,9 +1,13 @@
import pickle
import json
import numpy as np
from sklearn.externals import joblib
from sklearn.linear_model import Ridge
from azureml.core.model import Model
# sklearn.externals.joblib is removed in 0.23
try:
    from sklearn.externals import joblib
except ImportError:
    import joblib


def init():
@@ -380,6 +380,24 @@
"#compute_target.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Delete the DataDriftDetector\n",
"\n",
"Invoking the `delete()` method on the object deletes the the drift monitor permanently and cannot be undone. You will no longer be able to find it in the UI and the `list()` or `get()` methods. The object on which delete() was called will have its state set to deleted and name suffixed with deleted. The baseline and target datasets and model data that was collected, if any, are not deleted. The compute is not deleted. The DataDrift schedule pipeline is disabled and archived."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"monitor.delete()"
]
},
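A small verification sketch, not in the original notebook: it assumes the `DataDriftDetector` class from azureml.datadrift and the workspace object `ws` used earlier.

# Sketch only: list the remaining drift monitors in the workspace after deletion.
# Assumes `ws` is the Workspace object used earlier in the notebook.
from azureml.datadrift import DataDriftDetector

for detector in DataDriftDetector.list(ws):
    print(detector.name)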
{
"cell_type": "markdown",
"metadata": {},

5
index.md
@@ -65,6 +65,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [Resuming a model](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/ml-frameworks/tensorflow/training/train-tensorflow-resume-training/train-tensorflow-resume-training.ipynb) | Resume a model in TensorFlow from a previously submitted run | MNIST | AML Compute | None | TensorFlow | None |
| [Training in Spark](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-in-spark/train-in-spark.ipynb) | Submitting a run on a Spark cluster | None | HDI cluster | None | PySpark | None |
| [Train on Azure Machine Learning Compute](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb) | Submit a run on Azure Machine Learning Compute. | Diabetes | AML Compute | None | None | None |
| [Train on Azure Machine Learning Compute Instance](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-computeinstance/train-on-computeinstance.ipynb) | Submit a run on Azure Machine Learning Compute Instance. | Diabetes | Compute Instance | None | None | None |
| [Train on local compute](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-local/train-on-local.ipynb) | Train a model locally | Diabetes | Local | None | None | None |
| [Train in a remote Linux virtual machine](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) | Configure and execute a run | Diabetes | Data Science Virtual Machine | None | None | None |
| [Using Tensorboard](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training-with-deep-learning/export-run-history-to-tensorboard/export-run-history-to-tensorboard.ipynb) | Export the run history as Tensorboard logs | None | None | None | TensorFlow | None |
@@ -95,6 +96,8 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
|:----|:-----|:-------:|:----------------:|:-----------------:|:------------:|:------------:|
| [DNN Text Featurization](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb) | Text featurization using DNNs for classification | None | AML Compute | None | None | None |
| [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) | | | | | | |
| [fairlearn-azureml-mitigation](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/fairness/fairlearn-azureml-mitigation.ipynb) | | | | | | |
| [upload-fairness-dashboard](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/fairness/upload-fairness-dashboard.ipynb) | | | | | | |
| [lightgbm-example](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/gbdt/lightgbm/lightgbm-example.ipynb) | | | | | | |
| [azure-ml-with-nvidia-rapids](https://github.com/Azure/MachineLearningNotebooks/blob/master//contrib/RAPIDS/azure-ml-with-nvidia-rapids.ipynb) | | | | | | |
| [auto-ml-continuous-retraining](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb) | | | | | | |
@@ -121,11 +124,13 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [train-explain-model-on-amlcompute-and-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-on-amlcompute-and-deploy.ipynb) | | | | | | |
| [training_notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/notebook_runner/training_notebook.ipynb) | | | | | | |
| [nyc-taxi-data-regression-model-building](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb) | | | | | | |
| [pipeline-style-transfer-mpi](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/pipeline-style-transfer/pipeline-style-transfer-mpi.ipynb) | | | | | | |
| [authentication-in-azureml](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.ipynb) | | | | | | |
| [pong_rllib](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb) | | | | | | |
| [cartpole_ci](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/cartpole-on-compute-instance/cartpole_ci.ipynb) | | | | | | |
| [cartpole_sc](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.ipynb) | | | | | | |
| [minecraft](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.ipynb) | | | | | | |
| [particle](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/multiagent-particle-envs/particle.ipynb) | | | | | | |
| [devenv_setup](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/setup/devenv_setup.ipynb) | | | | | | |
| [Logging APIs](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb) | Logging APIs and analyzing results | None | None | None | None | None |
| [distributed-cntk-with-custom-docker](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training-with-deep-learning/distributed-cntk-with-custom-docker/distributed-cntk-with-custom-docker.ipynb) | | | | | | |
@@ -102,7 +102,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.7.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.10.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

@@ -217,6 +217,7 @@
"outputs": [],
"source": [
"%%time\n",
"import uuid\n",
"from azureml.core.webservice import Webservice\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.environment import Environment\n",
@@ -230,8 +231,9 @@
"myenv = Environment.get(workspace=ws, name=\"tutorial-env\", version=\"1\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)\n",
"\n",
"service_name = 'sklearn-mnist-svc-' + str(uuid.uuid4())[:4]\n",
"service = Model.deploy(workspace=ws, \n",
"                       name='sklearn-mnist-svc', \n",
"                       name=service_name, \n",
"                       models=[model], \n",
"                       inference_config=inference_config, \n",
"                       deployment_config=aciconfig)\n",
Some files were not shown because too many files have changed in this diff.