mirror of https://github.com/Azure/MachineLearningNotebooks.git
synced 2025-12-20 01:27:06 -05:00

Compare commits: release_up...master (20 commits)

| SHA1 |
|---|
| f1aff553c4 |
| d195a673e2 |
| 8dce0fa6fe |
| 4e8a240a71 |
| 5b019e28de |
| bf4cb1e86c |
| eaa7c56590 |
| 8fc0fa040d |
| 56e13b0b9a |
| 785fe3c962 |
| 3c341f6e9a |
| aae88e87ea |
| 2352e458c7 |
| 8373b93887 |
| f0442166cd |
| 33ca8c7933 |
| 3fd1ce8993 |
| aa93588190 |
| 12520400e5 |
| 35614e83fa |
@@ -103,7 +103,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.54.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.59.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -1,621 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Unfairness Mitigation with Fairlearn and Azure Machine Learning\n",
"**This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio**\n",
"\n",
"## Table of Contents\n",
"\n",
"1. [Introduction](#Introduction)\n",
"1. [Loading the Data](#LoadingData)\n",
"1. [Training an Unmitigated Model](#UnmitigatedModel)\n",
"1. [Mitigation with GridSearch](#Mitigation)\n",
"1. [Uploading a Fairness Dashboard to Azure](#AzureUpload)\n",
"    1. Registering models\n",
"    1. Computing Fairness Metrics\n",
"    1. Uploading to Azure\n",
"1. [Conclusion](#Conclusion)\n",
"\n",
"<a id=\"Introduction\"></a>\n",
"## Introduction\n",
"This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.org) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.org/).\n",
"\n",
"We will apply the [grid search algorithm](https://fairlearn.org/v0.4.6/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n",
"\n",
"### Setup\n",
"\n",
"To use this notebook, an Azure Machine Learning workspace is required.\n",
"Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.\n",
"This notebook also requires the following packages:\n",
"* `azureml-contrib-fairness`\n",
"* `fairlearn>=0.6.2` (pre-v0.5.0 will work with minor modifications)\n",
"* `joblib`\n",
"* `liac-arff`\n",
"* `raiwidgets`\n",
"\n",
"Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# !pip install --upgrade scikit-learn>=0.22.1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"LoadingData\"></a>\n",
"## Loading the Data\n",
"We use the well-known `adult` census dataset, which we will fetch from the OpenML website. We start with a fairly unremarkable set of imports:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate\n",
"from raiwidgets import FairnessDashboard\n",
"\n",
"from sklearn.compose import ColumnTransformer\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
"from sklearn.compose import make_column_selector as selector\n",
"from sklearn.pipeline import Pipeline\n",
"\n",
"import pandas as pd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now load and inspect the data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fairness_nb_utils import fetch_census_dataset\n",
"\n",
"data = fetch_census_dataset()\n",
"\n",
"# Extract the items we want\n",
"X_raw = data.data\n",
"y = (data.target == '>50K') * 1\n",
"\n",
"X_raw[\"race\"].value_counts().to_dict()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"A = X_raw[['sex','race']]\n",
"X_raw = X_raw.drop(labels=['sex', 'race'], axis=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(\n",
"    X_raw, y, A, test_size=0.3, random_state=12345, stratify=y\n",
")\n",
"\n",
"# Ensure indices are aligned between X, y and A,\n",
"# after all the slicing and splitting of DataFrames\n",
"# and Series\n",
"\n",
"X_train = X_train.reset_index(drop=True)\n",
"X_test = X_test.reset_index(drop=True)\n",
"y_train = y_train.reset_index(drop=True)\n",
"y_test = y_test.reset_index(drop=True)\n",
"A_train = A_train.reset_index(drop=True)\n",
"A_test = A_test.reset_index(drop=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).\n",
"\n",
"For this preprocessing, we make use of `Pipeline` objects from `sklearn`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"numeric_transformer = Pipeline(\n",
"    steps=[\n",
"        (\"impute\", SimpleImputer()),\n",
"        (\"scaler\", StandardScaler()),\n",
"    ]\n",
")\n",
"\n",
"categorical_transformer = Pipeline(\n",
"    [\n",
"        (\"impute\", SimpleImputer(strategy=\"most_frequent\")),\n",
"        (\"ohe\", OneHotEncoder(handle_unknown=\"ignore\", sparse=False)),\n",
"    ]\n",
")\n",
"\n",
"preprocessor = ColumnTransformer(\n",
"    transformers=[\n",
"        (\"num\", numeric_transformer, selector(dtype_exclude=\"category\")),\n",
"        (\"cat\", categorical_transformer, selector(dtype_include=\"category\")),\n",
"    ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train = preprocessor.fit_transform(X_train)\n",
"X_test = preprocessor.transform(X_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"UnmitigatedModel\"></a>\n",
"## Training an Unmitigated Model\n",
"\n",
"So we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)\n",
"\n",
"unmitigated_predictor.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can view this model in the fairness dashboard, and see the disparities which appear:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"FairnessDashboard(sensitive_features=A_test,\n",
"                  y_true=y_test,\n",
"                  y_pred={\"unmitigated\": unmitigated_predictor.predict(X_test)})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunitiy - males are offered loans at three times the rate of females.\n",
"\n",
"Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact."
]
},
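{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough numerical check of what the dashboard shows, we can tabulate accuracy and selection rate by group. This cell is a minimal sketch (not part of the original notebook), assuming the `MetricFrame` API introduced in recent Fairlearn releases:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical illustration: per-group metrics for the unmitigated model\n",
"from fairlearn.metrics import MetricFrame, selection_rate\n",
"from sklearn.metrics import accuracy_score\n",
"\n",
"mf = MetricFrame(metrics={'accuracy': accuracy_score, 'selection_rate': selection_rate},\n",
"                 y_true=y_test,\n",
"                 y_pred=unmitigated_predictor.predict(X_test),\n",
"                 sensitive_features=A_test.sex)\n",
"mf.by_group"
]
},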
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"Mitigation\"></a>\n",
"## Mitigation with GridSearch\n",
"\n",
"The `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.\n",
"\n",
"For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True),\n",
"                   constraints=DemographicParity(),\n",
"                   grid_size=71)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.\n",
"\n",
"The following cell trains a many copies of the underlying estimator, and may take a minute or two to run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sweep.fit(X_train, y_train,\n",
"          sensitive_features=A_train.sex)\n",
"\n",
"# For Fairlearn pre-v0.5.0, need sweep._predictors\n",
"predictors = sweep.predictors_"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"errors, disparities = [], []\n",
"for predictor in predictors:\n",
"    error = ErrorRate()\n",
"    error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)\n",
"    disparity = DemographicParity()\n",
"    disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)\n",
"\n",
"    errors.append(error.gamma(predictor.predict)[0])\n",
"    disparities.append(disparity.gamma(predictor.predict).max())\n",
"\n",
"all_results = pd.DataFrame({\"predictor\": predictors, \"error\": errors, \"disparity\": disparities})\n",
"\n",
"dominant_models_dict = dict()\n",
"base_name_format = \"census_gs_model_{0}\"\n",
"row_id = 0\n",
"for row in all_results.itertuples():\n",
"    model_name = base_name_format.format(row_id)\n",
"    errors_for_lower_or_eq_disparity = all_results[\"error\"][all_results[\"disparity\"] <= row.disparity]\n",
"    if row.error <= errors_for_lower_or_eq_disparity.min():\n",
"        dominant_models_dict[model_name] = row.predictor\n",
"    row_id = row_id + 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"predictions_dominant = {\"census_unmitigated\": unmitigated_predictor.predict(X_test)}\n",
"models_dominant = {\"census_unmitigated\": unmitigated_predictor}\n",
"for name, predictor in dominant_models_dict.items():\n",
"    value = predictor.predict(X_test)\n",
"    predictions_dominant[name] = value\n",
"    models_dominant[name] = predictor"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"FairnessDashboard(sensitive_features=A_test,\n",
"                  y_true=y_test.tolist(),\n",
"                  y_pred=predictions_dominant)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute \"sex\"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.\n",
"\n",
"By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints."
]
},
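{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same Pareto front can also be sketched outside the dashboard, directly from the `all_results` DataFrame computed above. This cell is a minimal illustration (not part of the original notebook), assuming `matplotlib` is installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical illustration: plot the error-disparity tradeoff of the sweep\n",
"import matplotlib.pyplot as plt\n",
"\n",
"plt.scatter(all_results['error'], all_results['disparity'], label='GridSearch models')\n",
"plt.xlabel('Error')\n",
"plt.ylabel('Disparity')\n",
"plt.legend()\n",
"plt.show()"
]
},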
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"AzureUpload\"></a>\n",
"## Uploading a Fairness Dashboard to Azure\n",
"\n",
"Uploading a fairness dashboard to Azure is a two stage process. The `FairnessDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore:\n",
"1. Register the dominant models\n",
"1. Precompute all the required metrics\n",
"1. Upload to Azure\n",
"\n",
"Before that, we need to connect to Azure Machine Learning Studio:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace, Experiment, Model\n",
"\n",
"ws = Workspace.from_config()\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"RegisterModels\"></a>\n",
"### Registering Models\n",
"\n",
"The fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"import os\n",
"\n",
"os.makedirs('models', exist_ok=True)\n",
"\n",
"def register_model(name, model):\n",
"    print(\"Registering \", name)\n",
"    model_path = \"models/{0}.pkl\".format(name)\n",
"    joblib.dump(value=model, filename=model_path)\n",
"    registered_model = Model.register(model_path=model_path,\n",
"                                      model_name=name,\n",
"                                      workspace=ws)\n",
"    print(\"Registered \", registered_model.id)\n",
"    return registered_model.id\n",
"\n",
"model_name_id_mapping = dict()\n",
"for name, model in models_dominant.items():\n",
"    m_id = register_model(name, model)\n",
"    model_name_id_mapping[name] = m_id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, produce new predictions dictionaries, with the updated names:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"predictions_dominant_ids = dict()\n",
"for name, y_pred in predictions_dominant.items():\n",
"    predictions_dominant_ids[model_name_id_mapping[name]] = y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"PrecomputeMetrics\"></a>\n",
"### Precomputing Metrics\n",
"\n",
"We create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. Note that we use the `predictions_dominant_ids` dictionary we just created:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sf = { 'sex': A_test.sex, 'race': A_test.race }\n",
"\n",
"from fairlearn.metrics._group_metric_set import _create_group_metric_set\n",
"\n",
"dash_dict = _create_group_metric_set(y_true=y_test,\n",
"                                     predictions=predictions_dominant_ids,\n",
"                                     sensitive_features=sf,\n",
"                                     prediction_type='binary_classification')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"DashboardUpload\"></a>\n",
"### Uploading the Dashboard\n",
"\n",
"Now, we import our `contrib` package which contains the routine to perform the upload:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can create an Experiment, then a Run, and upload our dashboard to it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"exp = Experiment(ws, \"Test_Fairlearn_GridSearch_Census_Demo\")\n",
"print(exp)\n",
"\n",
"run = exp.start_logging()\n",
"try:\n",
"    dashboard_title = \"Dominant Models from GridSearch\"\n",
"    upload_id = upload_dashboard_dictionary(run,\n",
"                                            dash_dict,\n",
"                                            dashboard_name=dashboard_title)\n",
"    print(\"\\nUploaded to id: {0}\\n\".format(upload_id))\n",
"\n",
"    downloaded_dict = download_dashboard_by_upload_id(run, upload_id)\n",
"finally:\n",
"    run.complete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The dashboard can be viewed in the Run Details page.\n",
"\n",
"Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(dash_dict == downloaded_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"Conclusion\"></a>\n",
"## Conclusion\n",
"\n",
"In this notebook we have demonstrated how to use the `GridSearch` algorithm from Fairlearn to generate a collection of models, and then present them in the fairness dashboard in Azure Machine Learning Studio. Please remember that this notebook has not attempted to discuss the many considerations which should be part of any approach to unfairness mitigation. The [Fairlearn website](http://fairlearn.org/) provides that discussion"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "riedgar"
}
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.10"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -1,13 +0,0 @@
name: fairlearn-azureml-mitigation
dependencies:
- pip:
  - azureml-sdk
  - azureml-contrib-fairness
  - fairlearn>=0.6.2,<=0.7.0
  - joblib
  - liac-arff
  - raiwidgets~=0.28.0
  - itsdangerous==2.0.1
  - markupsafe<2.1.0
  - protobuf==3.20.0
  - numpy<1.24.0
@@ -1,111 +0,0 @@
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------

"""Utilities for azureml-contrib-fairness notebooks."""

import arff
from collections import OrderedDict
from contextlib import closing
import gzip
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.utils import Bunch
import time


def fetch_openml_with_retries(data_id, max_retries=4, retry_delay=60):
    """Fetch a given dataset from OpenML with retries as specified."""
    for i in range(max_retries):
        try:
            print("Download attempt {0} of {1}".format(i + 1, max_retries))
            data = fetch_openml(data_id=data_id, as_frame=True)
            break
        except Exception as e:  # noqa: B902
            print("Download attempt failed with exception:")
            print(e)
            if i + 1 != max_retries:
                print("Will retry after {0} seconds".format(retry_delay))
                time.sleep(retry_delay)
                retry_delay = retry_delay * 2
    else:
        raise RuntimeError("Unable to download dataset from OpenML")

    return data


_categorical_columns = [
    'workclass',
    'education',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'native-country'
]


def fetch_census_dataset():
    """Fetch the Adult Census Dataset.

    This uses a particular URL for the Adult Census dataset. The code
    is a simplified version of fetch_openml() in sklearn.

    The data are copied from:
    https://openml.org/data/v1/download/1595261.gz
    (as of 2021-03-31)
    """
    try:
        from urllib import urlretrieve
    except ImportError:
        from urllib.request import urlretrieve

    filename = "1595261.gz"
    data_url = "https://rainotebookscdn.blob.core.windows.net/datasets/"

    remaining_attempts = 5
    sleep_duration = 10
    while remaining_attempts > 0:
        try:
            urlretrieve(data_url + filename, filename)

            http_stream = gzip.GzipFile(filename=filename, mode='rb')

            with closing(http_stream):
                def _stream_generator(response):
                    for line in response:
                        yield line.decode('utf-8')

                stream = _stream_generator(http_stream)
                data = arff.load(stream)
        except Exception as exc:  # noqa: B902
            remaining_attempts -= 1
            print("Error downloading dataset from {} ({} attempt(s) remaining)"
                  .format(data_url, remaining_attempts))
            print(exc)
            time.sleep(sleep_duration)
            sleep_duration *= 2
            continue
        else:
            # dataset successfully downloaded
            break
    else:
        raise Exception("Could not retrieve dataset from {}.".format(data_url))

    attributes = OrderedDict(data['attributes'])
    arff_columns = list(attributes)

    raw_df = pd.DataFrame(data=data['data'], columns=arff_columns)

    target_column_name = 'class'
    target = raw_df.pop(target_column_name)
    for col_name in _categorical_columns:
        dtype = pd.api.types.CategoricalDtype(attributes[col_name])
        raw_df[col_name] = raw_df[col_name].astype(dtype, copy=False)

    result = Bunch()
    result.data = raw_df
    result.target = target

    return result
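A hypothetical usage sketch (not part of the original file), showing how the notebooks are expected to call this helper; the printed fields assume the `Bunch` populated above:

# Illustrative only: fetch the dataset and inspect the returned Bunch
if __name__ == "__main__":
    census = fetch_census_dataset()
    print(census.data.shape)
    print(census.target.value_counts().to_dict())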
@@ -1,545 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Upload a Fairness Dashboard to Azure Machine Learning Studio\n",
"**This notebook shows how to generate and upload a fairness assessment dashboard from Fairlearn to AzureML Studio**\n",
"\n",
"## Table of Contents\n",
"\n",
"1. [Introduction](#Introduction)\n",
"1. [Loading the Data](#LoadingData)\n",
"1. [Processing the Data](#ProcessingData)\n",
"1. [Training Models](#TrainingModels)\n",
"1. [Logging in to AzureML](#LoginAzureML)\n",
"1. [Registering the Models](#RegisterModels)\n",
"1. [Using the Fairness Dashboard](#LocalDashboard)\n",
"1. [Uploading a Fairness Dashboard to Azure](#AzureUpload)\n",
"    1. Computing Fairness Metrics\n",
"    1. Uploading to Azure\n",
"1. [Conclusion](#Conclusion)\n",
"\n",
"\n",
"<a id=\"Introduction\"></a>\n",
"## Introduction\n",
"\n",
"In this notebook, we walk through a simple example of using the `azureml-contrib-fairness` package to upload a collection of fairness statistics for a fairness dashboard. It is an example of integrating the [open source Fairlearn package](https://www.github.com/fairlearn/fairlearn) with Azure Machine Learning. This is not an example of fairness analysis or mitigation - this notebook simply shows how to get a fairness dashboard into the Azure Machine Learning portal. We will load the data and train a couple of simple models. We will then use Fairlearn to generate data for a Fairness dashboard, which we can upload to the Azure Machine Learning portal and view there.\n",
"\n",
"### Setup\n",
"\n",
"To use this notebook, an Azure Machine Learning workspace is required.\n",
"Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.\n",
"This notebook also requires the following packages:\n",
"* `azureml-contrib-fairness`\n",
"* `fairlearn>=0.6.2` (also works for pre-v0.5.0 with slight modifications)\n",
"* `joblib`\n",
"* `liac-arff`\n",
"* `raiwidgets`\n",
"\n",
"Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# !pip install --upgrade scikit-learn>=0.22.1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"LoadingData\"></a>\n",
"## Loading the Data\n",
"We use the well-known `adult` census dataset, which we fetch from the OpenML website. We start with a fairly unremarkable set of imports:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn import svm\n",
"from sklearn.compose import ColumnTransformer\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
"from sklearn.compose import make_column_selector as selector\n",
"from sklearn.pipeline import Pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can load the data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fairness_nb_utils import fetch_census_dataset\n",
"\n",
"data = fetch_census_dataset()\n",
"\n",
"# Extract the items we want\n",
"X_raw = data.data\n",
"y = (data.target == '>50K') * 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can take a look at some of the data. For example, the next cell shows the counts of the different races identified in the dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(X_raw[\"race\"].value_counts().to_dict())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ProcessingData\"></a>\n",
"## Processing the Data\n",
"\n",
"With the data loaded, we process it for our needs. First, we extract the sensitive features of interest into `A` (conventionally used in the literature) and leave the rest of the feature data in `X_raw`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"A = X_raw[['sex','race']]\n",
"X_raw = X_raw.drop(labels=['sex', 'race'], axis=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(\n",
"    X_raw, y, A, test_size=0.3, random_state=12345, stratify=y\n",
")\n",
"\n",
"# Ensure indices are aligned between X, y and A,\n",
"# after all the slicing and splitting of DataFrames\n",
"# and Series\n",
"\n",
"X_train = X_train.reset_index(drop=True)\n",
"X_test = X_test.reset_index(drop=True)\n",
"y_train = y_train.reset_index(drop=True)\n",
"y_test = y_test.reset_index(drop=True)\n",
"A_train = A_train.reset_index(drop=True)\n",
"A_test = A_test.reset_index(drop=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).\n",
"\n",
"For this preprocessing, we make use of `Pipeline` objects from `sklearn`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"numeric_transformer = Pipeline(\n",
"    steps=[\n",
"        (\"impute\", SimpleImputer()),\n",
"        (\"scaler\", StandardScaler()),\n",
"    ]\n",
")\n",
"\n",
"categorical_transformer = Pipeline(\n",
"    [\n",
"        (\"impute\", SimpleImputer(strategy=\"most_frequent\")),\n",
"        (\"ohe\", OneHotEncoder(handle_unknown=\"ignore\", sparse=False)),\n",
"    ]\n",
")\n",
"\n",
"preprocessor = ColumnTransformer(\n",
"    transformers=[\n",
"        (\"num\", numeric_transformer, selector(dtype_exclude=\"category\")),\n",
"        (\"cat\", categorical_transformer, selector(dtype_include=\"category\")),\n",
"    ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train = preprocessor.fit_transform(X_train)\n",
"X_test = preprocessor.transform(X_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"TrainingModels\"></a>\n",
"## Training Models\n",
"\n",
"We now train a couple of different models on our data. The `adult` census dataset is a classification problem - the goal is to predict whether a particular individual exceeds an income threshold. For the purpose of generating a dashboard to upload, it is sufficient to train two basic classifiers. First, a logistic regression classifier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lr_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)\n",
"\n",
"lr_predictor.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And for comparison, a support vector classifier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"svm_predictor = svm.SVC()\n",
"\n",
"svm_predictor.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"LoginAzureML\"></a>\n",
"## Logging in to AzureML\n",
"\n",
"With our two classifiers trained, we can log into our AzureML workspace:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace, Experiment, Model\n",
"\n",
"ws = Workspace.from_config()\n",
"ws.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"RegisterModels\"></a>\n",
"## Registering the Models\n",
"\n",
"Next, we register our models. By default, the subroutine which uploads the models checks that the names provided correspond to registered models in the workspace. We define a utility routine to do the registering:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"import os\n",
"\n",
"os.makedirs('models', exist_ok=True)\n",
"\n",
"def register_model(name, model):\n",
"    print(\"Registering \", name)\n",
"    model_path = \"models/{0}.pkl\".format(name)\n",
"    joblib.dump(value=model, filename=model_path)\n",
"    registered_model = Model.register(model_path=model_path,\n",
"                                      model_name=name,\n",
"                                      workspace=ws)\n",
"    print(\"Registered \", registered_model.id)\n",
"    return registered_model.id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we register the models. For convenience in subsequent method calls, we store the results in a dictionary, which maps the `id` of the registered model (a string in `name:version` format) to the predictor itself:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_dict = {}\n",
"\n",
"lr_reg_id = register_model(\"fairness_linear_regression\", lr_predictor)\n",
"model_dict[lr_reg_id] = lr_predictor\n",
"svm_reg_id = register_model(\"fairness_svm\", svm_predictor)\n",
"model_dict[svm_reg_id] = svm_predictor"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using the Fairlearn Dashboard\n",
|
||||
"\n",
|
||||
"We can now examine the fairness of the two models we have training, both as a function of race and (binary) sex. Before uploading the dashboard to the AzureML portal, we will first instantiate a local instance of the Fairlearn dashboard.\n",
"\n",
"Regardless of the viewing location, the dashboard is based on three things - the true values, the model predictions and the sensitive feature values. The dashboard can use predictions from multiple models and multiple sensitive features if desired (as we are doing here).\n",
"\n",
"Our first step is to generate a dictionary mapping the `id` of the registered model to the corresponding array of predictions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ys_pred = {}\n",
"for n, p in model_dict.items():\n",
"    ys_pred[n] = p.predict(X_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can examine these predictions in a locally invoked Fairlearn dashboard. This can be compared to the dashboard uploaded to the portal (in the next section):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from raiwidgets import FairnessDashboard\n",
"\n",
"FairnessDashboard(sensitive_features=A_test,\n",
"                  y_true=y_test.tolist(),\n",
"                  y_pred=ys_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"AzureUpload\"></a>\n",
"## Uploading a Fairness Dashboard to Azure\n",
"\n",
"Uploading a fairness dashboard to Azure is a two-stage process. The `FairnessDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. The required stages are therefore:\n",
"1. Precompute all the required metrics\n",
"1. Upload to Azure\n",
"\n",
"\n",
"### Computing Fairness Metrics\n",
"We use Fairlearn to create a dictionary which contains all the data required to display a dashboard. This includes both the raw data (true values, predicted values and sensitive features), and also the fairness metrics. The API is similar to that used to invoke the Dashboard locally. However, there are a few minor changes to the API, and the type of problem being examined (binary classification, regression etc.) needs to be specified explicitly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sf = { 'Race': A_test.race, 'Sex': A_test.sex }\n",
"\n",
"from fairlearn.metrics._group_metric_set import _create_group_metric_set\n",
"\n",
"dash_dict = _create_group_metric_set(y_true=y_test,\n",
"                                     predictions=ys_pred,\n",
"                                     sensitive_features=sf,\n",
"                                     prediction_type='binary_classification')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `_create_group_metric_set()` method is currently underscored since its exact design is not yet final in Fairlearn."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Uploading to Azure\n",
"\n",
"We can now import the `azureml.contrib.fairness` package itself. We will round-trip the data, so there are two required subroutines:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can upload the generated dictionary to AzureML. The upload method requires a run, so we first create an experiment and a run. The uploaded dashboard can be seen on the corresponding Run Details page in AzureML Studio. For completeness, we also download the dashboard dictionary which we uploaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"exp = Experiment(ws, \"notebook-01\")\n",
"print(exp)\n",
"\n",
"run = exp.start_logging()\n",
"try:\n",
"    dashboard_title = \"Sample notebook upload\"\n",
"    upload_id = upload_dashboard_dictionary(run,\n",
"                                            dash_dict,\n",
"                                            dashboard_name=dashboard_title)\n",
"    print(\"\\nUploaded to id: {0}\\n\".format(upload_id))\n",
"\n",
"    downloaded_dict = download_dashboard_by_upload_id(run, upload_id)\n",
"finally:\n",
"    run.complete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can verify that the dashboard dictionary which we downloaded matches our upload:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(dash_dict == downloaded_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"Conclusion\"></a>\n",
"## Conclusion\n",
"\n",
"In this notebook we have demonstrated how to generate and upload a fairness dashboard to AzureML Studio. We have not discussed how to analyse the results and apply mitigations. Those topics will be covered elsewhere."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "riedgar"
}
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
@@ -1,13 +0,0 @@
name: upload-fairness-dashboard
dependencies:
- pip:
  - azureml-sdk
  - azureml-contrib-fairness
  - fairlearn>=0.6.2,<=0.7.0
  - joblib
  - liac-arff
  - raiwidgets~=0.28.0
  - itsdangerous==2.0.1
  - markupsafe<2.1.0
  - protobuf==3.20.0
  - numpy<1.24.0
@@ -9,7 +9,6 @@ As a pre-requisite, run the [configuration Notebook](../configuration.ipynb) not
* [train-on-amlcompute](./training/train-on-amlcompute): Use a 1-n node Azure ML managed compute cluster for remote runs on Azure CPU or GPU infrastructure.
* [train-on-remote-vm](./training/train-on-remote-vm): Use Data Science Virtual Machine as a target for remote runs.
* [logging-api](./track-and-monitor-experiments/logging-api): Learn about the details of logging metrics to run history.
* [production-deploy-to-aks](./deployment/production-deploy-to-aks): Deploy a model to production at scale on Azure Kubernetes Service.
* [enable-app-insights-in-production-service](./deployment/enable-app-insights-in-production-service): Learn how to use App Insights with a production web service.

Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
@@ -5,24 +5,22 @@ channels:
- main
dependencies:
# The python interpreter version.
# Azure ML only supports 3.7.0 and later.
# Azure ML only supports 3.8 and later.
- pip==22.3.1
- python>=3.9,<3.10
- python>=3.10,<3.11
- holidays==0.29
- scipy==1.10.1
- tqdm==4.66.1
# TODO: Remove this requirement when azureml-responsibleai will
# upgrade responsibleai to version >=0.30.0
- scikit-learn<1.1

- pip:
  # Required packages for AzureML execution, history, and data preparation.
  - azureml-widgets~=1.54.0
  - azureml-defaults~=1.54.0
  - -r https://automlsdkdataresources.blob.core.windows.net/validated-requirements/1.54.0/validated_win32_requirements.txt [--no-deps]
  - azureml-widgets~=1.59.0
  - azureml-defaults~=1.59.0
  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.59.0/validated_win32_requirements.txt [--no-deps]
  - matplotlib==3.7.1
  - xgboost==1.3.3
  - xgboost==1.5.2
  - prophet==1.1.4
  - pandas==1.3.5
  - cmdstanpy==1.1.0
  - onnx==1.16.1
  - setuptools-git==1.2
  - spacy==3.7.4
  - https://aka.ms/automl-resources/packages/en_core_web_sm-3.7.1.tar.gz
@@ -7,15 +7,12 @@ dependencies:
# The python interpreter version.
# Azure ML only supports 3.7 and later.
- pip==22.3.1
- python>=3.9,<3.10
- python>=3.10,<3.11
- matplotlib==3.7.1
- numpy>=1.21.6,<=1.23.5
- urllib3==1.26.7
- scipy==1.10.1
# TODO: Upgrade this requirement to 1.1.3 when azureml-responsibleai will
# upgrade responsibleai to version >=0.30.0
- scikit-learn<1.1
- py-xgboost<=1.3.3
- scikit-learn==1.5.1
- holidays==0.29
- pytorch::pytorch=1.11.0
- cudatoolkit=10.1.243
@@ -23,10 +20,11 @@ dependencies:

- pip:
  # Required packages for AzureML execution, history, and data preparation.
  - azureml-widgets~=1.54.0
  - azureml-defaults~=1.54.0
  - azureml-widgets~=1.59.0
  - azureml-defaults~=1.59.0
  - pytorch-transformers==1.0.0
  - spacy==2.2.4
  - spacy==3.7.4
  - xgboost==1.5.2
  - prophet==1.1.4
  - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
  - -r https://automlsdkdataresources.blob.core.windows.net/validated-requirements/1.54.0/validated_linux_requirements.txt [--no-deps]
  - https://aka.ms/automl-resources/packages/en_core_web_sm-3.7.1.tar.gz
  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.59.0/validated_linux_requirements.txt [--no-deps]
@@ -7,22 +7,20 @@ dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.7 and later.
- pip==22.3.1
- python>=3.9,<3.10
- python>=3.10,<3.11
- numpy>=1.21.6,<=1.23.5
- scipy==1.10.1
# TODO: Upgrade this requirement to 1.1.3 when azureml-responsibleai will
# upgrade responsibleai to version >=0.30.0
- scikit-learn<1.1
- scikit-learn==1.5.1
- holidays==0.29

- pip:
  # Required packages for AzureML execution, history, and data preparation.
  - azureml-widgets~=1.54.0
  - azureml-defaults~=1.54.0
  - azureml-widgets~=1.59.0
  - azureml-defaults~=1.59.0
  - pytorch-transformers==1.0.0
  - spacy==2.2.4
  - prophet==1.1.4
  - xgboost==1.3.3
  - xgboost==1.5.2
  - spacy==3.7.4
  - matplotlib==3.7.1
  - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
  - -r https://automlsdkdataresources.blob.core.windows.net/validated-requirements/1.54.0/validated_darwin_requirements.txt [--no-deps]
  - https://aka.ms/automl-resources/packages/en_core_web_sm-3.7.1.tar.gz
  - -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.59.0/validated_darwin_requirements.txt [--no-deps]
@@ -93,7 +93,8 @@
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"from azureml.core.dataset import Dataset\n",
|
||||
"from azureml.train.automl import AutoMLConfig\n",
|
||||
"from azureml.interpret import ExplanationClient"
|
||||
"from azureml.interpret import ExplanationClient\n",
|
||||
"from azureml.data.datapath import DataPath"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -266,10 +267,12 @@
"pd.DataFrame(data).to_csv(\"data/train_data.csv\", index=False)\n",
"\n",
"ds = ws.get_default_datastore()\n",
"ds.upload(\n",
"    src_dir=\"./data\", target_path=\"bankmarketing\", overwrite=True, show_progress=True\n",
"target = DataPath(\n",
"    datastore=ds, path_on_datastore=\"bankmarketing/train_data.csv\", name=\"bankmarketing\"\n",
")\n",
"Dataset.File.upload_directory(\n",
"    src_dir=\"./data\", target=target, overwrite=True, show_progress=True\n",
")\n",
"\n",
"\n",
"# Upload the training data as a tabular dataset for access during training on remote compute\n",
"train_data = Dataset.Tabular.from_delimited_files(\n",
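Aside: the change above replaces the older ds.upload(...) call with DataPath plus Dataset.File.upload_directory. A condensed sketch of the new pattern, assuming a workspace loaded from config and a local ./data folder:

from azureml.core import Workspace, Dataset
from azureml.data.datapath import DataPath

ws = Workspace.from_config()
ds = ws.get_default_datastore()

# Upload the local folder to the datastore under "bankmarketing"
target = DataPath(datastore=ds, path_on_datastore="bankmarketing")
Dataset.File.upload_directory(src_dir="./data", target=target, overwrite=True, show_progress=True)

# Then reference the uploaded csv as a tabular dataset
train_data = Dataset.Tabular.from_delimited_files(path=[(ds, "bankmarketing/train_data.csv")])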
@@ -1090,7 +1093,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.12"
"version": "3.10.14"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
@@ -1104,5 +1107,5 @@
"task": "Classification"
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}
@@ -1,4 +0,0 @@
name: auto-ml-classification-bank-marketing-all-features
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-classification-credit-card-fraud
dependencies:
- pip:
  - azureml-sdk
@@ -1,609 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Text Classification Using Deep Learning**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Train](#Train)\n",
"1. [Evaluate](#Evaluate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"This notebook demonstrates classification with text data using deep learning in AutoML.\n",
"\n",
"AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data. Depending on the compute cluster the user provides, AutoML tries out Bidirectional Encoder Representations from Transformers (BERT) when GPU compute is used, and a Bidirectional Long Short-Term Memory network (BiLSTM) when CPU compute is used, thereby optimizing the choice of DNN for the user's setup.\n",
"\n",
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
"\n",
"Notebook synopsis:\n",
"\n",
"1. Creating an Experiment in an existing Workspace\n",
"2. Configuration and remote run of AutoML for a text dataset (20 Newsgroups dataset from scikit-learn) for classification\n",
"3. Registering the best model for future use\n",
"4. Evaluating the final model on a test set"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import logging\n",
"import os\n",
"import shutil\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.run import Run\n",
"from azureml.widgets import RunDetails\n",
"from azureml.core.model import Model\n",
"from helper import run_inference, get_result_df\n",
"from azureml.train.automl import AutoMLConfig\n",
"from sklearn.datasets import fetch_20newsgroups"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# Choose an experiment name.\n",
"experiment_name = \"automl-classification-text-dnn\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace Name\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Experiment Name\"] = experiment.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up a compute cluster\n",
"This section uses a user-provided compute cluster (named \"dnntext-cluster\" in this example). If a cluster with this name does not exist in the user's workspace, the below code will create a new cluster. You can choose the parameters of the cluster as mentioned in the comments.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"Whether you provide/select a CPU or GPU cluster, AutoML will choose the appropriate DNN for that setup - BiLSTM or BERT text featurizer will be included in the candidate featurizers on CPU and GPU respectively. If your goal is to obtain the most accurate model, we recommend you use GPU clusters since BERT featurizers usually outperform BiLSTM featurizers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"num_nodes = 2\n",
"\n",
"# Choose a name for your cluster.\n",
"amlcompute_cluster_name = \"dnntext-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
"    compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
"    print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
"    compute_config = AmlCompute.provisioning_configuration(\n",
"        vm_size=\"Standard_NC6s_v3\",  # GPU VM to use BERT (recommended for best performance);\n",
"        # for BiLSTM on CPU, select a CPU VM such as \"STANDARD_D2_V2\"\n",
"        # or a similar option\n",
"        # available in your workspace\n",
"        idle_seconds_before_scaledown=60,\n",
"        max_nodes=num_nodes,\n",
"    )\n",
"    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
"\n",
"compute_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get data\n",
"For this notebook we will use 20 Newsgroups data from scikit-learn. We filter the data to contain four classes and take a sample as training data. Please note that more data is needed to improve accuracy. For this notebook we provide a small-data example so that you can use this template with your larger datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_dir = \"text-dnn-data\"  # Local directory to store data\n",
"blobstore_datadir = data_dir  # Blob store directory to store data in\n",
"target_column_name = \"y\"\n",
"feature_column_name = \"X\"\n",
"\n",
"\n",
"def get_20newsgroups_data():\n",
"    \"\"\"Fetches 20 Newsgroups data from scikit-learn\n",
"    Returns them in form of pandas dataframes\n",
"    \"\"\"\n",
"    remove = (\"headers\", \"footers\", \"quotes\")\n",
"    categories = [\n",
"        \"rec.sport.baseball\",\n",
"        \"rec.sport.hockey\",\n",
"        \"comp.graphics\",\n",
"        \"sci.space\",\n",
"    ]\n",
"\n",
"    data = fetch_20newsgroups(\n",
"        subset=\"train\",\n",
"        categories=categories,\n",
"        shuffle=True,\n",
"        random_state=42,\n",
"        remove=remove,\n",
"    )\n",
"    data = pd.DataFrame(\n",
"        {feature_column_name: data.data, target_column_name: data.target}\n",
"    )\n",
"\n",
"    data_train = data[:200]\n",
"    data_test = data[200:300]\n",
"\n",
"    data_train = remove_blanks_20news(\n",
"        data_train, feature_column_name, target_column_name\n",
"    )\n",
"    data_test = remove_blanks_20news(data_test, feature_column_name, target_column_name)\n",
"\n",
"    return data_train, data_test\n",
"\n",
"\n",
"def remove_blanks_20news(data, feature_column_name, target_column_name):\n",
"\n",
"    for index, row in data.iterrows():\n",
"        data.at[index, feature_column_name] = (\n",
"            row[feature_column_name].replace(\"\\n\", \" \").strip()\n",
"        )\n",
"\n",
"    data = data[data[feature_column_name] != \"\"]\n",
"\n",
"    return data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Fetch data and upload to datastore for use in training"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data_train, data_test = get_20newsgroups_data()\n",
"\n",
"if not os.path.isdir(data_dir):\n",
"    os.mkdir(data_dir)\n",
"\n",
"train_data_fname = data_dir + \"/train_data.csv\"\n",
"test_data_fname = data_dir + \"/test_data.csv\"\n",
"\n",
"data_train.to_csv(train_data_fname, index=False)\n",
"data_test.to_csv(test_data_fname, index=False)\n",
"\n",
"datastore = ws.get_default_datastore()\n",
"datastore.upload(src_dir=data_dir, target_path=blobstore_datadir, overwrite=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_dataset = Dataset.Tabular.from_delimited_files(\n",
"    path=[(datastore, blobstore_datadir + \"/train_data.csv\")]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prepare AutoML run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook uses the blocked_models parameter to exclude some models that can take a longer time to train on some text datasets. You can choose to remove models from the blocked_models list, but you may need to increase the experiment_timeout_minutes parameter value to get results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
"    \"experiment_timeout_minutes\": 30,\n",
"    \"primary_metric\": \"accuracy\",\n",
"    \"max_concurrent_iterations\": num_nodes,\n",
"    \"max_cores_per_iteration\": -1,\n",
"    \"enable_dnn\": True,\n",
"    \"enable_early_stopping\": True,\n",
"    \"validation_size\": 0.3,\n",
"    \"verbosity\": logging.INFO,\n",
"    \"enable_voting_ensemble\": False,\n",
"    \"enable_stack_ensemble\": False,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(\n",
"    task=\"classification\",\n",
"    debug_log=\"automl_errors.log\",\n",
"    compute_target=compute_target,\n",
"    training_data=train_dataset,\n",
"    label_column_name=target_column_name,\n",
"    blocked_models=[\"LightGBM\", \"XGBoostClassifier\"],\n",
"    **automl_settings,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit AutoML Run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Retrieve the Best Model\n",
"Below we select the best model pipeline from our iterations and use it to score the test data on the same compute cluster."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For local inferencing, you can load the model locally via the method `remote_run.get_output()`. For more information on the arguments expected by this method, you can run `remote_run.get_output??`.\n",
"Note that when the model contains BERT, this step will require pytorch and pytorch-transformers installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your MachineLearningNotebooks folder here: \"MachineLearningNotebooks\\how-to-use-azureml\\automated-machine-learning\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Retrieve the best Run object\n",
"best_run = automl_run.get_best_child()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now see what text transformations are used to convert text data to features for this dataset, including deep learning transformations based on BiLSTM or Transformer (BERT is one implementation of a Transformer) models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Download the featurization summary JSON file locally\n",
"best_run.download_file(\n",
"    \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n",
")\n",
"\n",
"# Render the JSON as a pandas DataFrame\n",
"with open(\"featurization_summary.json\", \"r\") as f:\n",
"    records = json.load(f)\n",
"\n",
"featurization_summary = pd.DataFrame.from_records(records)\n",
"featurization_summary[\"Transformations\"].tolist()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Registering the best model\n",
"We now register the best fitted model from the AutoML Run for use in future deployments. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get result stats, extract the best model from the AutoML run, then download and register it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"summary_df = get_result_df(automl_run)\n",
"best_dnn_run_id = summary_df[\"run_id\"].iloc[0]\n",
"best_dnn_run = Run(experiment, best_dnn_run_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_dir = \"Model\"  # Local folder where the model will be stored temporarily\n",
"if not os.path.isdir(model_dir):\n",
"    os.mkdir(model_dir)\n",
"\n",
"best_dnn_run.download_file(\"outputs/model.pkl\", model_dir + \"/model.pkl\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Register the model in your Azure Machine Learning Workspace. If you previously registered a model, please make sure to delete it so as to replace it with this new model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Register the model\n",
"model_name = \"textDNN-20News\"\n",
"model = Model.register(\n",
"    model_path=model_dir + \"/model.pkl\", model_name=model_name, tags=None, workspace=ws\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Evaluate on Test Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now use the best fitted model from the AutoML Run to make predictions on the test set. \n",
"\n",
"The test set schema should match that of the training set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_dataset = Dataset.Tabular.from_delimited_files(\n",
"    path=[(datastore, blobstore_datadir + \"/test_data.csv\")]\n",
")\n",
"\n",
"# preview the first 3 rows of the dataset\n",
"test_dataset.take(3).to_pandas_dataframe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_experiment = Experiment(ws, experiment_name + \"_test\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"script_folder = os.path.join(os.getcwd(), \"inference\")\n",
"os.makedirs(script_folder, exist_ok=True)\n",
"shutil.copy(\"infer.py\", script_folder)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_run = run_inference(\n",
"    test_experiment,\n",
"    compute_target,\n",
"    script_folder,\n",
"    best_dnn_run,\n",
"    test_dataset,\n",
"    target_column_name,\n",
"    model_name,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Display computed metrics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"RunDetails(test_run).show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_run.wait_for_completion()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pd.Series(test_run.get_metrics())"
]
}
],
"metadata": {
"authors": [
{
"name": "anshirga"
}
],
"compute": [
"AML Compute"
],
"datasets": [
"None"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"None"
],
"friendly_name": "DNN Text Featurization",
"index_order": 2,
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
},
"tags": [
"None"
],
"task": "Text featurization using DNNs for classification"
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -1,4 +0,0 @@
name: auto-ml-classification-text-dnn
dependencies:
- pip:
  - azureml-sdk
@@ -1,80 +0,0 @@
import json

import pandas as pd
from azureml.core import Environment, ScriptRunConfig
from azureml.core.run import Run


def run_inference(
    test_experiment,
    compute_target,
    script_folder,
    train_run,
    test_dataset,
    target_column_name,
    model_name,
):

    try:
        inference_env = train_run.get_environment()
    except BaseException:
        run_details = train_run.get_details()
        run_def = run_details.get("runDefinition")
        env = run_def.get("environment")
        if env is None:
            raise
        json.dump(env, open("azureml_environment.json", "w"))
        inference_env = Environment.load_from_directory(".")

    est = ScriptRunConfig(
        source_directory=script_folder,
        script="infer.py",
        arguments=[
            "--target_column_name",
            target_column_name,
            "--model_name",
            model_name,
            "--input-data",
            test_dataset.as_named_input("data"),
        ],
        compute_target=compute_target,
        environment=inference_env,
    )

    run = test_experiment.submit(
        est,
        tags={
            "training_run_id": train_run.id,
            "run_algorithm": train_run.properties["run_algorithm"],
            "valid_score": train_run.properties["score"],
            "primary_metric": train_run.properties["primary_metric"],
        },
    )

    run.log("run_algorithm", run.tags["run_algorithm"])
    return run


def get_result_df(remote_run):

    children = list(remote_run.get_children(recursive=True))
    summary_df = pd.DataFrame(
        index=["run_id", "run_algorithm", "primary_metric", "Score"]
    )
    goal_minimize = False
    for run in children:
        if "run_algorithm" in run.properties and "score" in run.properties:
            summary_df[run.id] = [
                run.id,
                run.properties["run_algorithm"],
                run.properties["primary_metric"],
                float(run.properties["score"]),
            ]
            if "goal" in run.properties:
                goal_minimize = run.properties["goal"].split("_")[-1] == "min"

    summary_df = summary_df.T.sort_values(
        "Score", ascending=goal_minimize
    ).drop_duplicates(["run_algorithm"])
    summary_df = summary_df.set_index("run_algorithm")

    return summary_df
@@ -1,70 +0,0 @@
import argparse

import pandas as pd
import numpy as np

import joblib

from azureml.automl.runtime.shared.score import scoring, constants
from azureml.core import Run, Dataset
from azureml.core.model import Model


parser = argparse.ArgumentParser()
parser.add_argument(
    "--target_column_name",
    type=str,
    dest="target_column_name",
    help="Target Column Name",
)
parser.add_argument(
    "--model_name", type=str, dest="model_name", help="Name of registered model"
)

parser.add_argument("--input-data", type=str, dest="input_data", help="Dataset")

args = parser.parse_args()
target_column_name = args.target_column_name
model_name = args.model_name

print("args passed are: ")
print("Target column name: ", target_column_name)
print("Name of registered model: ", model_name)

model_path = Model.get_model_path(model_name)
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)

run = Run.get_context()

test_dataset = Dataset.get_by_id(run.experiment.workspace, id=args.input_data)

X_test_df = test_dataset.drop_columns(
    columns=[target_column_name]
).to_pandas_dataframe()
y_test_df = (
    test_dataset.with_timestamp_columns(None)
    .keep_columns(columns=[target_column_name])
    .to_pandas_dataframe()
)

predicted = model.predict_proba(X_test_df)

if isinstance(predicted, pd.DataFrame):
    predicted = predicted.values

# Use the AutoML scoring module
train_labels = model.classes_
class_labels = np.unique(
    np.concatenate((y_test_df.values, np.reshape(train_labels, (-1, 1))))
)
classification_metrics = list(constants.CLASSIFICATION_SCALAR_SET)
scores = scoring.score_classification(
    y_test_df.values, predicted, classification_metrics, class_labels, train_labels
)

print("scores:")
print(scores)

for key, value in scores.items():
    run.log(key, value)
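Aside: infer.py pairs with helper.py above. The submission side attaches the dataset with test_dataset.as_named_input("data"), which AzureML substitutes with the dataset id at runtime; the run side resolves that id back into a Dataset. A distilled sketch of the round trip (argument names taken from the two files):

import argparse
from azureml.core import Run, Dataset

parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest="input_data", help="Dataset id")
args = parser.parse_args()

run = Run.get_context()
# args.input_data carries the id written by as_named_input("data") on the submission side
test_dataset = Dataset.get_by_id(run.experiment.workspace, id=args.input_data)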
@@ -1,4 +0,0 @@
name: auto-ml-continuous-retraining
dependencies:
- pip:
  - azureml-sdk
@@ -36,7 +36,10 @@ except Exception:
    last_train_time = datetime.min.replace(tzinfo=pytz.UTC)

train_ds = Dataset.get_by_name(ws, args.ds_name)
dataset_changed_time = train_ds.data_changed_time
dataset_changed_time = train_ds.data_changed_time.replace(tzinfo=pytz.UTC)

print("dataset_changed_time=" + str(dataset_changed_time))
print("last_train_time=" + str(last_train_time))

if not dataset_changed_time > last_train_time:
    print("Cancelling run since there is no new data.")
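Aside: the fix above makes dataset_changed_time timezone-aware before the comparison; in Python 3, comparing an offset-naive datetime with an offset-aware one raises a TypeError. A minimal illustration (assumes pytz is installed, as in the script):

from datetime import datetime
import pytz

naive = datetime(2024, 1, 1)                   # no tzinfo
aware = datetime.min.replace(tzinfo=pytz.UTC)  # offset-aware floor value

# naive > aware would raise:
# TypeError: can't compare offset-naive and offset-aware datetimes
print(naive.replace(tzinfo=pytz.UTC) > aware)  # True once both sides are aware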
@@ -9,7 +9,7 @@ To run these notebooks on your own notebook server, use these installation instru
The instructions below will install everything you need and then start a Jupyter notebook.
If you would like to use a lighter-weight version of the client that does not install all of the machine learning libraries locally, you can leverage the [experimental notebooks](experimental/README.md).

### 1. Install mini-conda from [here](https://conda.io/miniconda.html), choose 64-bit Python 3.7 or higher.
### 1. Install mini-conda from [here](https://conda.io/miniconda.html), choose 64-bit Python 3.8 or higher.
- **Note**: if you already have conda installed, you can keep using it but it should be version 4.4.10 or later (as shown by: conda -V). If you have a previous version installed, you can update it using the command: conda update conda.
There's no need to install mini-conda specifically.
@@ -97,7 +97,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.54.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.59.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
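Aside: beyond printing the two versions, the check can be made into a hard gate. A sketch, not part of the diff; the packaging dependency and the error message are assumptions:

import azureml.core
from packaging.version import Version

assert Version(azureml.core.VERSION) >= Version("1.59.0"), (
    "SDK too old; upgrade with: pip install --upgrade azureml-sdk"
)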
@@ -148,7 +148,7 @@
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpu-cluster\"\n",
"cpu_cluster_name = \"cpu-codegen\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",

@@ -1,4 +0,0 @@
name: codegen-for-autofeaturization
dependencies:
- pip:
  - azureml-sdk
@@ -97,7 +97,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.54.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.59.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

@@ -1,4 +0,0 @@
name: custom-model-training-from-autofeaturization-run
dependencies:
- pip:
  - azureml-sdk
@@ -1,420 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification of credit card fraudulent transactions on local managed compute**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Test](#Test)\n",
"1. [Acknowledgements](#Acknowledgements)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.\n",
"\n",
"This notebook uses local managed compute to train the model.\n",
"\n",
"If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an experiment using an existing workspace.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model using local managed compute.\n",
"4. Explore the results.\n",
"5. Test the fitted model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.compute_target import LocalTarget\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.train.automl import AutoMLConfig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.54.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = 'automl-local-managed'\n",
"\n",
"experiment=Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output['Subscription ID'] = ws.subscription_id\n",
"output['Workspace'] = ws.name\n",
"output['Resource Group'] = ws.resource_group\n",
"output['Location'] = ws.location\n",
"output['Experiment Name'] = experiment.name\n",
"pd.set_option('display.max_colwidth', None)\n",
"outputDf = pd.DataFrame(data = output, index = [''])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Determine if local docker is configured for Linux images\n",
"\n",
"Local managed runs are submitted to a Linux Docker container, so Docker needs to be configured to use Linux containers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check if Docker is installed and Linux containers are enabled\n",
"import subprocess\n",
"from subprocess import CalledProcessError\n",
"try:\n",
"    assert subprocess.run(\"docker -v\", shell=True).returncode == 0, 'Local Managed runs require docker to be installed.'\n",
"    out = subprocess.check_output(\"docker system info\", shell=True).decode('ascii')\n",
"    assert \"OSType: linux\" in out, 'Docker engine needs to be configured to use Linux containers.' \\\n",
"        'https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers'\n",
"except CalledProcessError as ex:\n",
"    raise Exception('Local Managed runs require docker to be installed.') from ex"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data\n",
"\n",
"Load the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n",
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
"training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n",
"label_column_name = 'Class'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**enable_early_stopping**|Stop the run if the metric score is not showing improvement.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**training_data**|Input dataset, containing both features and label column.|\n",
"|**label_column_name**|The name of the label column.|\n",
"|**enable_local_managed**|Enable the experimental local-managed scenario.|\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_settings = {\n",
"    \"n_cross_validations\": 3,\n",
"    \"primary_metric\": 'average_precision_score_weighted',\n",
"    \"enable_early_stopping\": True,\n",
"    \"experiment_timeout_hours\": 0.3,  # for real scenarios we recommend a timeout of at least one hour\n",
"    \"verbosity\": logging.INFO,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(task = 'classification',\n",
"                             debug_log = 'automl_errors.log',\n",
"                             compute_target = LocalTarget(),\n",
"                             enable_local_managed = True,\n",
"                             training_data = training_data,\n",
"                             label_column_name = label_column_name,\n",
"                             **automl_settings\n",
"                             )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"parent_run = experiment.submit(automl_config, show_output = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you need to retrieve a run that already started, use the following code\n",
"#from azureml.train.automl.run import AutoMLRun\n",
"#parent_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"parent_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Explain model\n",
"\n",
"Automated ML models can be explained and visualized using the SDK Explainability library. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analyze results\n",
"\n",
"### Retrieve the Best Child Run\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_best_child` method returns the best run. Overloads on `get_best_child` allow you to retrieve the best run for *any* logged metric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run = parent_run.get_best_child()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test the fitted model\n",
"\n",
"Now that the model is trained, split the data in the same way it was split for training (here the split happens locally), then run the test data through the trained model to get the predicted values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_test_df = validation_data.drop_columns(columns=[label_column_name])\n",
"y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Creating ModelProxy for submitting prediction runs to the training environment.\n",
"We will create a ModelProxy for the best child run, which will allow us to submit a run that does the prediction in the training environment. Unlike the local client, which can have different versions of some libraries, the training environment will have all the compatible libraries for the model already."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl.model_proxy import ModelProxy\n",
"best_model_proxy = ModelProxy(best_run)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# call the predict functions on the model proxy\n",
"y_pred = best_model_proxy.predict(X_test_df).to_pandas_dataframe()\n",
"y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Acknowledgements"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This Credit Card Fraud Detection dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/ and is available at: https://www.kaggle.com/mlg-ulb/creditcardfraud\n",
"\n",
"\n",
"The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on https://www.researchgate.net and the page of the DefeatFraud project\n",
"Please cite the following works: \n",
"• Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015\n",
"• Dal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Aël; Waterschoot, Serge; Bontempi, Gianluca. Learned lessons in credit card fraud detection from a practitioner perspective, Expert systems with applications,41,10,4915-4928,2014, Pergamon\n",
"• Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy, IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE\n",
"• Dal Pozzolo, Andrea. Adaptive Machine learning for credit card fraud detection, ULB MLG PhD thesis (supervised by G. Bontempi)\n",
"• Carcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-Aël; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. Scarff: a scalable framework for streaming credit card fraud detection with Spark, Information fusion,41, 182-194,2018,Elsevier\n",
"• Carcillo, Fabrizio; Le Borgne, Yann-Aël; Caelen, Olivier; Bontempi, Gianluca. Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization, International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing"
]
}
],
"metadata": {
"authors": [
{
"name": "sekrupa"
}
],
"category": "tutorial",
"compute": [
"AML Compute"
],
"datasets": [
"Creditcard"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"file_extension": ".py",
"framework": [
"None"
],
"friendly_name": "Classification of credit card fraudulent transactions using Automated ML",
"index_order": 5,
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
},
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"tags": [
"AutomatedML"
],
"task": "Classification",
"version": "3.6.7"
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -1,4 +0,0 @@
name: auto-ml-classification-credit-card-fraud-local-managed
dependencies:
- pip:
  - azureml-sdk
@@ -91,7 +91,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.54.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.59.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

@@ -1,4 +0,0 @@
name: auto-ml-regression-model-proxy
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-backtest-many-models
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-backtest-single-model
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-bike-share
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-energy-demand
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-function
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-github-dau
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-hierarchical-timeseries
dependencies:
- pip:
  - azureml-sdk
@@ -366,7 +366,7 @@
"USE_CURATED_ENV = True\n",
"if USE_CURATED_ENV:\n",
"    curated_environment = Environment.get(\n",
"        workspace=ws, name=\"AzureML-sklearn-0.24-ubuntu18.04-py37-cpu\"\n",
"        workspace=ws, name=\"AzureML-sklearn-1.5\"\n",
"    )\n",
"    aml_run_config.environment = curated_environment\n",
"else:\n",
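Aside: curated environment names change between SDK releases, as this hunk shows. A defensive sketch that looks the environment up and falls back to listing what the workspace offers; the fallback logic is illustrative, not part of the diff:

from azureml.core import Workspace
from azureml.core.environment import Environment

ws = Workspace.from_config()
try:
    curated_environment = Environment.get(workspace=ws, name="AzureML-sklearn-1.5")
except Exception:
    # Environment.list returns a dict mapping name -> Environment
    names = sorted(Environment.list(workspace=ws))
    print("Not found; available environments include:", names[:10])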
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-many-models
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-orange-juice-sales
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-pipelines
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-univariate-recipe-experiment-settings
dependencies:
- pip:
  - azureml-sdk
@@ -1,4 +0,0 @@
name: auto-ml-forecasting-univariate-recipe-run-experiment
dependencies:
- pip:
  - azureml-sdk
@@ -570,273 +570,6 @@
"automl_run.upload_file(\"outputs/scoring_explainer.pkl\", scoring_explainer_file_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploying the scoring and explainer models as a web service to Azure Kubernetes Service (AKS)\n",
"\n",
"We use the TreeScoringExplainer from the azureml.interpret package to create the scoring explainer, which will be used to compute the raw and engineered feature importances at inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Register trained automl model present in the 'outputs' folder in the artifacts\n",
"original_model = automl_run.register_model(\n",
"    model_name=\"automl_model\", model_path=\"outputs/model.pkl\"\n",
")\n",
"scoring_explainer_model = automl_run.register_model(\n",
"    model_name=\"scoring_explainer\", model_path=\"outputs/scoring_explainer.pkl\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create the conda dependencies for setting up the service\n",
"\n",
"We need to download the conda dependencies using the automl_run object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.automl.core.shared import constants\n",
"from azureml.core.environment import Environment\n",
"\n",
"automl_run.download_file(constants.CONDA_ENV_FILE_PATH, \"myenv.yml\")\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"myenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Write the Entry Script\n",
"Write the script that will be used to predict on your model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import joblib\n",
"import pandas as pd\n",
"from azureml.core.model import Model\n",
"from azureml.train.automl.runtime.automl_explain_utilities import (\n",
"    automl_setup_model_explanations,\n",
")\n",
"\n",
"\n",
"def init():\n",
"    global automl_model\n",
"    global scoring_explainer\n",
"\n",
"    # Retrieve the path to the model file using the model name\n",
"    # Assume the original model is registered as \"automl_model\"\n",
"    automl_model_path = Model.get_model_path(\"automl_model\")\n",
"    scoring_explainer_path = Model.get_model_path(\"scoring_explainer\")\n",
"\n",
"    automl_model = joblib.load(automl_model_path)\n",
"    scoring_explainer = joblib.load(scoring_explainer_path)\n",
"\n",
"\n",
"def run(raw_data):\n",
"    data = pd.read_json(raw_data, orient=\"records\")\n",
"    # Make prediction\n",
"    predictions = automl_model.predict(data)\n",
"    # Setup for inferencing explanations\n",
"    automl_explainer_setup_obj = automl_setup_model_explanations(\n",
"        automl_model, X_test=data, task=\"classification\"\n",
"    )\n",
"    # Retrieve model explanations for engineered explanations\n",
"    engineered_local_importance_values = scoring_explainer.explain(\n",
"        automl_explainer_setup_obj.X_test_transform\n",
"    )\n",
"    # Retrieve model explanations for raw explanations\n",
"    raw_local_importance_values = scoring_explainer.explain(\n",
"        automl_explainer_setup_obj.X_test_transform, get_raw=True\n",
"    )\n",
"    # You can return any data type as long as it is JSON-serializable\n",
"    return {\n",
"        \"predictions\": predictions.tolist(),\n",
"        \"engineered_local_importance_values\": engineered_local_importance_values,\n",
"        \"raw_local_importance_values\": raw_local_importance_values,\n",
"    }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create the InferenceConfig \n",
"Create the inference config that will be used when deploying the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inf_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Provision the AKS Cluster\n",
"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AksCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your cluster.\n",
"aks_name = \"scoring-explain\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
"    aks_target = ComputeTarget(workspace=ws, name=aks_name)\n",
"    print(\"Found existing cluster, use it.\")\n",
"except ComputeTargetException:\n",
"    prov_config = AksCompute.provisioning_configuration(vm_size=\"STANDARD_D3_V2\")\n",
"    aks_target = ComputeTarget.create(\n",
"        workspace=ws, name=aks_name, provisioning_configuration=prov_config\n",
"    )\n",
"aks_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Deploy web service to AKS"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the web service configuration (using default here)\n",
"from azureml.core.webservice import AksWebservice\n",
"from azureml.core.model import Model\n",
"\n",
"aks_config = AksWebservice.deploy_configuration()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service_name = \"model-scoring-local-aks\"\n",
"\n",
"aks_service = Model.deploy(\n",
"    workspace=ws,\n",
"    name=aks_service_name,\n",
"    models=[scoring_explainer_model, original_model],\n",
"    inference_config=inf_config,\n",
"    deployment_config=aks_config,\n",
"    deployment_target=aks_target,\n",
")\n",
"\n",
"aks_service.wait_for_deployment(show_output=True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View the service logs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.get_logs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Consume the web service using the run method to do the scoring and the explanation of the scoring\n",
"We test the web service by passing data. The run() method retrieves API keys behind the scenes to make sure that the call is authenticated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Serialize the first row of the test data into json\n",
"X_test_json = X_test_df[:1].to_json(orient=\"records\")\n",
"print(X_test_json)\n",
"\n",
"# Call the service to get the predictions and the engineered and raw explanations\n",
"output = aks_service.run(X_test_json)\n",
"\n",
"# Print the predicted value\n",
"print(\"predictions:\\n{}\\n\".format(output[\"predictions\"]))\n",
"# Print the engineered feature importances for the predicted value\n",
"print(\n",
"    \"engineered_local_importance_values:\\n{}\\n\".format(\n",
"        output[\"engineered_local_importance_values\"]\n",
"    )\n",
")\n",
"# Print the raw feature importances for the predicted value\n",
"print(\n",
"    \"raw_local_importance_values:\\n{}\\n\".format(output[\"raw_local_importance_values\"])\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Clean up\n",
"Delete the service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -1,4 +0,0 @@
|
||||
name: auto-ml-classification-credit-card-fraud-local
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
@@ -1,4 +0,0 @@
|
||||
name: auto-ml-regression-explanation-featurization
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
@@ -1,4 +0,0 @@
|
||||
name: auto-ml-regression
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
@@ -1,217 +0,0 @@
|
||||
|
||||
NOTICES AND INFORMATION
|
||||
Do Not Translate or Localize
|
||||
|
||||
This Azure Machine Learning service example notebooks repository includes material from the projects listed below.
|
||||
|
||||
|
||||
1. SSD-Tensorflow (https://github.com/balancap/ssd-tensorflow)
|
||||
|
||||
|
||||
%% SSD-Tensorflow NOTICES AND INFORMATION BEGIN HERE
|
||||
=========================================
|
||||
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
|
||||
=========================================
|
||||
END OF SSD-Tensorflow NOTICES AND INFORMATION
|
||||
@@ -1,104 +0,0 @@
|
||||
|
||||
# Notebooks for Microsoft Azure Machine Learning Hardware Accelerated Models SDK
|
||||
|
||||
Easily create and train a model using various deep neural networks (DNNs) as a featurizer for deployment to Azure or a Data Box Edge device for ultra-low latency inferencing using FPGAs. The following models are currently available:
|
||||
|
||||
* ResNet 50
|
||||
* ResNet 152
|
||||
* DenseNet-121
|
||||
* VGG-16
|
||||
* SSD-VGG
|
||||
|
||||
To learn more about the azureml-accel-models classes, see the section [Model Classes](#model-classes) below or the [Azure ML Accel Models SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-accel-models/azureml.accel?view=azure-ml-py).
|
||||
|
||||
### Step 1: Create an Azure ML workspace
|
||||
Follow [these instructions](https://docs.microsoft.com/en-us/azure/machine-learning/service/setup-create-workspace) to install the Azure ML SDK on your local machine, create an Azure ML workspace, and set up your notebook environment, which is required for the next step.
|
||||
|
||||
### Step 2: Check your FPGA quota
|
||||
Use the Azure CLI to check whether you have quota.
|
||||
|
||||
```shell
|
||||
az vm list-usage --location "eastus" -o table
|
||||
```
|
||||
|
||||
Besides ``eastus`` (used above), the other supported locations are ``southeastasia``, ``westeurope``, and ``westus2``.
|
||||
|
||||
Under the "Name" column, look for "Standard PBS Family vCPUs" and ensure you have at least 6 vCPUs under "CurrentValue."
|
||||
|
||||
If you do not have quota, then submit a request form [here](https://aka.ms/accelerateAI).
|
||||
|
||||
### Step 3: Install the Azure ML Accelerated Models SDK
|
||||
Once you have set up your environment, install the Azure ML Accel Models SDK. This package requires tensorflow >= 1.6,<2.0 to be installed.
|
||||
|
||||
If you already have tensorflow >= 1.6,<2.0 installed in your development environment, you can install the SDK package using:
|
||||
|
||||
```
|
||||
pip install azureml-accel-models
|
||||
```
|
||||
|
||||
If you do not have tensorflow >= 1.6,<2.0 and are using a CPU-only development environment, our SDK with tensorflow can be installed using:
|
||||
|
||||
```
|
||||
pip install azureml-accel-models[cpu]
|
||||
```
|
||||
|
||||
If your machine supports GPU (for example, on an [Azure DSVM](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/overview)), then you can leverage the tensorflow-gpu functionality using:
|
||||
|
||||
```
|
||||
pip install azureml-accel-models[gpu]
|
||||
```
|
||||
|
||||
### Step 4: Follow our notebooks
|
||||
|
||||
We provide notebooks to walk through the following scenarios, linked below:
|
||||
* [Quickstart](https://github.com/Azure/MachineLearningNotebooks/blob/33d6def8c30d3dd3a5bfbea50b9c727788185faf/how-to-use-azureml/deployment/accelerated-models/accelerated-models-quickstart.ipynb), deploy and inference a ResNet50 model trained on ImageNet
|
||||
* [Object Detection](https://github.com/Azure/MachineLearningNotebooks/blob/33d6def8c30d3dd3a5bfbea50b9c727788185faf/how-to-use-azureml/deployment/accelerated-models/accelerated-models-object-detection.ipynb), deploy and inference an SSD-VGG model that can do object detection
|
||||
* [Training models](https://github.com/Azure/MachineLearningNotebooks/blob/33d6def8c30d3dd3a5bfbea50b9c727788185faf/how-to-use-azureml/deployment/accelerated-models/accelerated-models-training.ipynb), train one of our accelerated models on the Kaggle Cats and Dogs dataset to see how to improve accuracy on custom datasets
|
||||
|
||||
**Note**: the above notebooks work only for tensorflow >= 1.6,<2.0.
|
||||
|
||||
<a name="model-classes"></a>
|
||||
## Model Classes
|
||||
As stated above, we support 5 Accelerated Models. Here's more information on their input and output tensors.
|
||||
|
||||
**Available models and output tensors**
|
||||
|
||||
The available models and the corresponding default classifier output tensors are below. This is the value that you would use during inferencing if you used the default classifier.
|
||||
* Resnet50, QuantizedResnet50
|
||||
``
|
||||
output_tensors = "classifier_1/resnet_v1_50/predictions/Softmax:0"
|
||||
``
|
||||
* Resnet152, QuantizedResnet152
|
||||
``
|
||||
output_tensors = "classifier/resnet_v1_152/predictions/Softmax:0"
|
||||
``
|
||||
* Densenet121, QuantizedDensenet121
|
||||
``
|
||||
output_tensors = "classifier/densenet121/predictions/Softmax:0"
|
||||
``
|
||||
* Vgg16, QuantizedVgg16
|
||||
``
|
||||
output_tensors = "classifier/vgg_16/fc8/squeezed:0"
|
||||
``
|
||||
* SsdVgg, QuantizedSsdVgg
|
||||
``
|
||||
output_tensors = ['ssd_300_vgg/block4_box/Reshape_1:0', 'ssd_300_vgg/block7_box/Reshape_1:0', 'ssd_300_vgg/block8_box/Reshape_1:0', 'ssd_300_vgg/block9_box/Reshape_1:0', 'ssd_300_vgg/block10_box/Reshape_1:0', 'ssd_300_vgg/block11_box/Reshape_1:0', 'ssd_300_vgg/block4_box/Reshape:0', 'ssd_300_vgg/block7_box/Reshape:0', 'ssd_300_vgg/block8_box/Reshape:0', 'ssd_300_vgg/block9_box/Reshape:0', 'ssd_300_vgg/block10_box/Reshape:0', 'ssd_300_vgg/block11_box/Reshape:0']
|
||||
``
|
||||
|
||||
For more information, please reference the azureml.accel.models package in the [Azure ML Python SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-accel-models/azureml.accel.models?view=azure-ml-py).
|
||||
|
||||
**Input tensors**
|
||||
|
||||
The input_tensors value defaults to "Placeholder:0" and is created in the [Image Preprocessing](#construct-model) step in the line:
|
||||
``
|
||||
in_images = tf.placeholder(tf.string)
|
||||
``
|
||||
|
||||
You can change the input_tensors name by doing this:
|
||||
``
|
||||
in_images = tf.placeholder(tf.string, name="images")
|
||||
``
|
||||
|
||||
|
||||
## Resources
|
||||
* [Read more about FPGAs](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas)
|
||||
@@ -1,14 +0,0 @@
|
||||
# Model Deployment with Azure ML service
|
||||
You can use Azure Machine Learning to package, debug, validate, and deploy inference containers to a variety of compute targets. This process is known as "MLOps" (ML operationalization).
|
||||
For more information, please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where
|
||||
|
||||
## Get Started
|
||||
To begin, you will need an ML workspace.
|
||||
For more information, please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-workspace
|
||||
|
||||
## Deploy to the cloud
|
||||
You can deploy to the cloud using the Azure ML CLI or the Azure ML SDK; a minimal SDK sketch follows the links below.
|
||||
- CLI example: https://aka.ms/azmlcli
|
||||
- Notebook example: [model-register-and-deploy](./model-register-and-deploy.ipynb).
|
||||
|
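Below is a minimal SDK sketch of a cloud deployment to Azure Container Instances, assuming a registered `model` and an `inference_config` like the ones built in the notebook linked above (the service name is illustrative):

```python
from azureml.core import Workspace
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # load the workspace from config.json

# Reserve 1 CPU core and 1 GB of memory for the container
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "my-aci-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```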
||||

|
||||
@@ -1,395 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Deploy Multiple Models as Webservice\n",
|
||||
"\n",
|
||||
"This example shows how to deploy a Webservice with multiple models in step-by-step fashion:\n",
|
||||
"\n",
|
||||
" 1. Register Models\n",
|
||||
" 2. Deploy Models as Webservice"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize Workspace\n",
|
||||
"\n",
|
||||
"Initialize a workspace object from persisted configuration."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"create workspace"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Register Models"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In this example, we will be using and registering two models. \n",
|
||||
"\n",
|
||||
"First we will train two simple models on the [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset) included with scikit-learn, serializing them to files in the current directory."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import joblib\n",
|
||||
"import sklearn\n",
|
||||
"\n",
|
||||
"from sklearn.datasets import load_diabetes\n",
|
||||
"from sklearn.linear_model import BayesianRidge, Ridge\n",
|
||||
"\n",
|
||||
"x, y = load_diabetes(return_X_y=True)\n",
|
||||
"\n",
|
||||
"first_model = Ridge().fit(x, y)\n",
|
||||
"second_model = BayesianRidge().fit(x, y)\n",
|
||||
"\n",
|
||||
"joblib.dump(first_model, \"first_model.pkl\")\n",
|
||||
"joblib.dump(second_model, \"second_model.pkl\")\n",
|
||||
"\n",
|
||||
"print(\"Trained models using scikit-learn {}.\".format(sklearn.__version__))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now that we have our trained models locally, we will register them as Models with the names `my_first_model` and `my_second_model` in the workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"register model from file"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import Model\n",
|
||||
"\n",
|
||||
"my_model_1 = Model.register(model_path=\"first_model.pkl\",\n",
|
||||
" model_name=\"my_first_model\",\n",
|
||||
" workspace=ws)\n",
|
||||
"\n",
|
||||
"my_model_2 = Model.register(model_path=\"second_model.pkl\",\n",
|
||||
" model_name=\"my_second_model\",\n",
|
||||
" workspace=ws)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Write the Entry Script\n",
|
||||
"Write the script that will be used to predict on your models"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Model.get_model_path()\n",
|
||||
"\n",
|
||||
"To get the paths of your models, use `Model.get_model_path(model_name, version=None, _workspace=None)` method. This method will find the path to a model using the name of the model registered under the workspace.\n",
|
||||
"\n",
|
||||
"In this example, we do not use the optional arguments `version` and `_workspace`.\n",
|
||||
"\n",
|
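"A minimal usage sketch, using one of the model names registered above:\n",
"\n",
"```python\n",
"model_1_path = Model.get_model_path(model_name='my_first_model')\n",
"```\n",
"\n",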
||||
"#### Using environment variable AZUREML_MODEL_DIR\n",
|
||||
"\n",
|
||||
"In other [examples](../deploy-to-cloud/score.py) with a single model deployment, we use the environment variable `AZUREML_MODEL_DIR` and model file name to get the model path. \n",
|
||||
"\n",
|
||||
"For single model deployments, this environment variable is the path to the model folder (`./azureml-models/$MODEL_NAME/$VERSION`). When we deploy multiple models, the environment variable is set to the folder containing all models (./azureml-models).\n",
|
||||
"\n",
|
||||
"If you're using multiple models and you know the versions of the models you deploy, you can use this method to get the model path:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"# Construct the model path using the registered model name, version, and model file name\n",
|
||||
"model_1_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_first_model', '1', 'first_model.pkl')\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile score.py\n",
|
||||
"import joblib\n",
|
||||
"import json\n",
|
||||
"import numpy as np\n",
|
||||
"\n",
|
||||
"from azureml.core.model import Model\n",
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global model_1, model_2\n",
|
||||
" # Here \"my_first_model\" is the name of the model registered under the workspace.\n",
|
||||
" # This call will return the path to the .pkl file on the local disk.\n",
|
||||
" model_1_path = Model.get_model_path(model_name='my_first_model')\n",
|
||||
" model_2_path = Model.get_model_path(model_name='my_second_model')\n",
|
||||
" \n",
|
||||
" # Deserialize the model files back into scikit-learn models.\n",
|
||||
" model_1 = joblib.load(model_1_path)\n",
|
||||
" model_2 = joblib.load(model_2_path)\n",
|
||||
"\n",
|
||||
"# Note you can pass in multiple rows for scoring.\n",
|
||||
"def run(raw_data):\n",
|
||||
" try:\n",
|
||||
" data = json.loads(raw_data)['data']\n",
|
||||
" data = np.array(data)\n",
|
||||
" \n",
|
||||
" # Call predict() on each model\n",
|
||||
" result_1 = model_1.predict(data)\n",
|
||||
" result_2 = model_2.predict(data)\n",
|
||||
"\n",
|
||||
" # You can return any JSON-serializable value.\n",
|
||||
" return {\"prediction1\": result_1.tolist(), \"prediction2\": result_2.tolist()}\n",
|
||||
" except Exception as e:\n",
|
||||
" result = str(e)\n",
|
||||
" return result"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create Environment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can now create and/or use an Environment object when deploying a Webservice. The Environment can have been previously registered with your Workspace, or it will be registered with it as a part of the Webservice deployment. Please note that your environment must include azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.\n",
|
||||
"\n",
|
||||
"More information can be found in our [using environments notebook](../training/using-environments/using-environments.ipynb)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Environment\n",
|
||||
"\n",
|
||||
"env = Environment(\"deploytocloudenv\")\n",
|
||||
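"# Note: hosting the model as a web service also requires azureml-defaults >= 1.0.45 (see the note above); add it here if it is not already part of your environment.\n",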
"env.python.conda_dependencies.add_pip_package(\"joblib\")\n",
|
||||
"env.python.conda_dependencies.add_pip_package(\"numpy==1.23\")\n",
|
||||
"env.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create Inference Configuration\n",
|
||||
"\n",
|
||||
"There is now support for a source directory, you can upload an entire folder from your local machine as dependencies for the Webservice.\n",
|
||||
"Note: in that case, environments's entry_script and file_path are relative paths to the source_directory path; myenv.docker.base_dockerfile is a string containing extra docker steps or contents of the docker file.\n",
|
||||
"\n",
|
||||
"Sample code for using a source directory:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"from azureml.core.environment import Environment\n",
|
||||
"from azureml.core.model import InferenceConfig\n",
|
||||
"\n",
|
||||
"myenv = Environment.from_conda_specification(name='myenv', file_path='env/myenv.yml')\n",
|
||||
"\n",
|
||||
"# explicitly set base_image to None when setting base_dockerfile\n",
|
||||
"myenv.docker.base_image = None\n",
|
||||
"# add extra docker commends to execute\n",
|
||||
"myenv.docker.base_dockerfile = \"FROM ubuntu\\n RUN echo \\\"hello\\\"\"\n",
|
||||
"\n",
|
||||
"inference_config = InferenceConfig(source_directory=\"C:/abc\",\n",
|
||||
" entry_script=\"x/y/score.py\",\n",
|
||||
" environment=myenv)\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
" - file_path: input parameter to Environment constructor. Manages conda and python package dependencies.\n",
|
||||
" - env.docker.base_dockerfile: any extra steps you want to inject into docker file\n",
|
||||
" - source_directory: holds source path as string, this entire folder gets added in image so its really easy to access any files within this folder or subfolder\n",
|
||||
" - entry_script: contains logic specific to initializing your model and running predictions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"create image"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import InferenceConfig\n",
|
||||
"\n",
|
||||
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=env)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deploy Model as Webservice on Azure Container Instance\n",
|
||||
"\n",
|
||||
"Note that the service creation can take few minutes."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"azuremlexception-remarks-sample"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.webservice import AciWebservice\n",
|
||||
"\n",
|
||||
"aci_service_name = \"aciservice-multimodel\"\n",
|
||||
"\n",
|
||||
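"# Reserve 1 CPU core and 1 GB of memory for the container\n",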
"deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
|
||||
"\n",
|
||||
"service = Model.deploy(ws, aci_service_name, [my_model_1, my_model_2], inference_config, deployment_config, overwrite=True)\n",
|
||||
"service.wait_for_deployment(True)\n",
|
||||
"\n",
|
||||
"print(service.state)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Test web service"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"\n",
|
||||
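"# x holds the diabetes features loaded during training; score the first two rows with both models\n",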
"test_sample = json.dumps({'data': x[0:2].tolist()})\n",
|
||||
"\n",
|
||||
"prediction = service.run(test_sample)\n",
|
||||
"\n",
|
||||
"print(prediction)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Delete ACI to clean up"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"deploy service",
|
||||
"aci"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"service.delete()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "jenns"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.8"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,6 +0,0 @@
|
||||
name: multi-model-register-and-deploy
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- numpy
|
||||
- scikit-learn
|
||||
@@ -1,12 +0,0 @@
|
||||
# Model Deployment with Azure ML service
|
||||
You can use Azure Machine Learning to package, debug, validate, and deploy inference containers to a variety of compute targets. This process is known as "MLOps" (ML operationalization).
|
||||
For more information, please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where
|
||||
|
||||
## Get Started
|
||||
To begin, you will need an ML workspace.
|
||||
For more information, please check out this article: https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-workspace
|
||||
|
||||
## Deploy locally
|
||||
You can deploy a model locally for testing & debugging using the Azure ML CLI or the Azure ML SDK; a minimal SDK sketch follows the links below.
|
||||
- CLI example: https://aka.ms/azmlcli
|
||||
- Notebook example: [register-model-deploy-local](./register-model-deploy-local.ipynb).
|
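Below is a minimal SDK sketch of a local deployment, assuming Docker is running and a registered `model` and an `inference_config` like the ones built in the notebook linked above:

```python
from azureml.core import Workspace
from azureml.core.model import Model
from azureml.core.webservice import LocalWebservice

ws = Workspace.from_config()  # load the workspace from config.json

# The port is optional; if omitted, Docker will choose a random unused port.
deployment_config = LocalWebservice.deploy_configuration(port=6789)

local_service = Model.deploy(ws, "test", [model], inference_config, deployment_config)
local_service.wait_for_deployment()
print(local_service.port)
```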
||||
@@ -1 +0,0 @@
|
||||
RUN echo "this is test"
|
||||
@@ -1,495 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Register model and deploy locally with advanced usages\n",
|
||||
"\n",
|
||||
"This example shows how to deploy a web service in step-by-step fashion:\n",
|
||||
"\n",
|
||||
" 1. Register model\n",
|
||||
" 2. Deploy the image as a web service in a local Docker container.\n",
|
||||
" 3. Quickly test changes to your entry script by reloading the local service.\n",
|
||||
" 4. Optionally, you can also make changes to model, conda or extra_docker_file_steps and update local service"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize Workspace\n",
|
||||
"\n",
|
||||
"Initialize a workspace object from persisted configuration."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"create workspace"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create trained model\n",
|
||||
"\n",
|
||||
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset). "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import joblib\n",
|
||||
"\n",
|
||||
"from sklearn.datasets import load_diabetes\n",
|
||||
"from sklearn.linear_model import Ridge\n",
|
||||
"\n",
|
||||
"dataset_x, dataset_y = load_diabetes(return_X_y=True)\n",
|
||||
"\n",
|
||||
"sk_model = Ridge().fit(dataset_x, dataset_y)\n",
|
||||
"\n",
|
||||
"joblib.dump(sk_model, \"sklearn_regression_model.pkl\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Register Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can add tags and descriptions to your models. we are using `sklearn_regression_model.pkl` file in the current directory as a model with the name `sklearn_regression_model` in the workspace.\n",
|
||||
"\n",
|
||||
"Using tags, you can track useful information such as the name and version of the machine learning library used to train the model, framework, category, target customer etc. Note that tags must be alphanumeric."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"register model from file",
|
||||
"sample-model-register"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import Model\n",
|
||||
"\n",
|
||||
"model = Model.register(model_path=\"sklearn_regression_model.pkl\",\n",
|
||||
" model_name=\"sklearn_regression_model\",\n",
|
||||
" tags={'area': \"diabetes\", 'type': \"regression\"},\n",
|
||||
" description=\"Ridge regression model to predict diabetes\",\n",
|
||||
" workspace=ws)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Manage your dependencies in a folder"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"source_directory = \"source_directory\"\n",
|
||||
"\n",
|
||||
"os.makedirs(source_directory, exist_ok=True)\n",
|
||||
"os.makedirs(os.path.join(source_directory, \"x/y\"), exist_ok=True)\n",
|
||||
"os.makedirs(os.path.join(source_directory, \"env\"), exist_ok=True)\n",
|
||||
"os.makedirs(os.path.join(source_directory, \"dockerstep\"), exist_ok=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Show `score.py`. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile source_directory/x/y/score.py\n",
|
||||
"import joblib\n",
|
||||
"import json\n",
|
||||
"import numpy as np\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"from inference_schema.schema_decorators import input_schema, output_schema\n",
|
||||
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global model\n",
|
||||
" # AZUREML_MODEL_DIR is an environment variable created during deployment. Join this path with the filename of the model file.\n",
|
||||
" # It holds the path to the directory that contains the deployed model (./azureml-models/$MODEL_NAME/$VERSION)\n",
|
||||
" # If there are multiple models, this value is the path to the directory containing all deployed models (./azureml-models)\n",
|
||||
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
|
||||
" # Deserialize the model file back into a sklearn model.\n",
|
||||
" model = joblib.load(model_path)\n",
|
||||
"\n",
|
||||
" global name\n",
|
||||
" # Note here, the entire source directory from inference config gets added into image.\n",
|
||||
" # Below is an example of how you can use any extra files in image.\n",
|
||||
" with open('./source_directory/extradata.json') as json_file:\n",
|
||||
" data = json.load(json_file)\n",
|
||||
" name = data[\"people\"][0][\"name\"]\n",
|
||||
"\n",
|
||||
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
|
||||
"output_sample = np.array([3726.995])\n",
|
||||
"\n",
|
||||
"@input_schema('data', NumpyParameterType(input_sample))\n",
|
||||
"@output_schema(NumpyParameterType(output_sample))\n",
|
||||
"def run(data):\n",
|
||||
" try:\n",
|
||||
" result = model.predict(data)\n",
|
||||
" # You can return any JSON-serializable object.\n",
|
||||
" return \"Hello \" + name + \" here is your result = \" + str(result)\n",
|
||||
" except Exception as e:\n",
|
||||
" error = str(e)\n",
|
||||
" return error"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile source_directory/extradata.json\n",
|
||||
"{\n",
|
||||
" \"people\": [\n",
|
||||
" {\n",
|
||||
" \"website\": \"microsoft.com\", \n",
|
||||
" \"from\": \"Seattle\", \n",
|
||||
" \"name\": \"Mrudula\"\n",
|
||||
" }\n",
|
||||
" ]\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create Inference Configuration\n",
|
||||
"\n",
|
||||
" - file_path: input parameter to Environment constructor. Manages conda and python package dependencies.\n",
|
||||
" - env.docker.base_dockerfile: any extra steps you want to inject into docker file\n",
|
||||
" - source_directory: holds source path as string, this entire folder gets added in image so its really easy to access any files within this folder or subfolder\n",
|
||||
" - entry_script: contains logic specific to initializing your model and running predictions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import sklearn\n",
|
||||
"\n",
|
||||
"from azureml.core.environment import Environment\n",
|
||||
"from azureml.core.model import InferenceConfig\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"myenv = Environment('myenv')\n",
|
||||
"myenv.python.conda_dependencies.add_pip_package(\"inference-schema[numpy-support]\")\n",
|
||||
"myenv.python.conda_dependencies.add_pip_package(\"joblib\")\n",
|
||||
"myenv.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))\n",
|
||||
"\n",
|
||||
"# explicitly set base_image to None when setting base_dockerfile\n",
|
||||
"myenv.docker.base_image = None\n",
|
||||
"myenv.docker.base_dockerfile = \"FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04\\nRUN echo \\\"this is test\\\"\"\n",
|
||||
"myenv.inferencing_stack_version = \"latest\"\n",
|
||||
"\n",
|
||||
"inference_config = InferenceConfig(source_directory=source_directory,\n",
|
||||
" entry_script=\"x/y/score.py\",\n",
|
||||
" environment=myenv)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deploy Model as a Local Docker Web Service\n",
|
||||
"\n",
|
||||
"*Make sure you have Docker installed and running.*\n",
|
||||
"\n",
|
||||
"Note that the service creation can take few minutes.\n",
|
||||
"\n",
|
||||
"NOTE:\n",
|
||||
"\n",
|
||||
"The Docker image runs as a Linux container. If you are running Docker for Windows, you need to ensure the Linux Engine is running:\n",
|
||||
"\n",
|
||||
" # PowerShell command to switch to Linux engine\n",
|
||||
" & 'C:\\Program Files\\Docker\\Docker\\DockerCli.exe' -SwitchLinuxEngine"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"deploy service",
|
||||
"aci"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.webservice import LocalWebservice\n",
|
||||
"\n",
|
||||
"# This is optional, if not provided Docker will choose a random unused port.\n",
|
||||
"deployment_config = LocalWebservice.deploy_configuration(port=6789)\n",
|
||||
"\n",
|
||||
"local_service = Model.deploy(ws, \"test\", [model], inference_config, deployment_config)\n",
|
||||
"\n",
|
||||
"local_service.wait_for_deployment()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print('Local service port: {}'.format(local_service.port))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Check Status and Get Container Logs\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(local_service.get_logs())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Test Web Service"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Call the web service with some input data to get a prediction."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"\n",
|
||||
"sample_input = json.dumps({\n",
|
||||
" 'data': dataset_x[0:2].tolist()\n",
|
||||
"})\n",
|
||||
"\n",
|
||||
"print(local_service.run(sample_input))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Reload Service\n",
|
||||
"\n",
|
||||
"You can update your score.py file and then call `reload()` to quickly restart the service. This will only reload your execution script and dependency files, it will not rebuild the underlying Docker image. As a result, `reload()` is fast, but if you do need to rebuild the image -- to add a new Conda or pip package, for instance -- you will have to call `update()`, instead (see below)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile source_directory/x/y/score.py\n",
|
||||
"import joblib\n",
|
||||
"import json\n",
|
||||
"import numpy as np\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"from inference_schema.schema_decorators import input_schema, output_schema\n",
|
||||
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global model\n",
|
||||
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
|
||||
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
|
||||
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
|
||||
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
|
||||
" # Deserialize the model file back into a sklearn model.\n",
|
||||
" model = joblib.load(model_path)\n",
|
||||
"\n",
|
||||
" global name, from_location\n",
|
||||
" # Note here, the entire source directory from inference config gets added into image.\n",
|
||||
" # Below is an example of how you can use any extra files in image.\n",
|
||||
" with open('source_directory/extradata.json') as json_file: \n",
|
||||
" data = json.load(json_file)\n",
|
||||
" name = data[\"people\"][0][\"name\"]\n",
|
||||
" from_location = data[\"people\"][0][\"from\"]\n",
|
||||
"\n",
|
||||
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
|
||||
"output_sample = np.array([3726.995])\n",
|
||||
"\n",
|
||||
"@input_schema('data', NumpyParameterType(input_sample))\n",
|
||||
"@output_schema(NumpyParameterType(output_sample))\n",
|
||||
"def run(data):\n",
|
||||
" try:\n",
|
||||
" result = model.predict(data)\n",
|
||||
" # You can return any JSON-serializable object.\n",
|
||||
" return \"Hello \" + name + \" from \" + from_location + \" here is your result = \" + str(result)\n",
|
||||
" except Exception as e:\n",
|
||||
" error = str(e)\n",
|
||||
" return error"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"local_service.reload()\n",
|
||||
"print(\"--------------------------------------------------------------\")\n",
|
||||
"\n",
|
||||
"# After calling reload(), run() will return the updated message.\n",
|
||||
"local_service.run(sample_input)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Update Service\n",
|
||||
"\n",
|
||||
"If you want to change your model(s), Conda dependencies, or deployment configuration, call `update()` to rebuild the Docker image.\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"\n",
|
||||
"local_service.update(models=[SomeOtherModelObject],\n",
|
||||
" deployment_config=local_config,\n",
|
||||
" inference_config=inference_config)\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Delete Service"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"local_service.delete()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "keriehm"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.8"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,556 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Register model and deploy locally\n",
|
||||
"\n",
|
||||
"This example shows how to deploy a web service in step-by-step fashion:\n",
|
||||
"\n",
|
||||
" 1. Register model\n",
|
||||
" 2. Deploy the image as a web service in a local Docker container.\n",
|
||||
" 3. Quickly test changes to your entry script by reloading the local service.\n",
|
||||
" 4. Optionally, you can also make changes to model, conda or extra_docker_file_steps and update local service"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize Workspace\n",
|
||||
"\n",
|
||||
"Initialize a workspace object from persisted configuration."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create trained model\n",
|
||||
"\n",
|
||||
"For this example, we will train a small model on scikit-learn's [diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset). "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import joblib\n",
|
||||
"\n",
|
||||
"from sklearn.datasets import load_diabetes\n",
|
||||
"from sklearn.linear_model import Ridge\n",
|
||||
"\n",
|
||||
"dataset_x, dataset_y = load_diabetes(return_X_y=True)\n",
|
||||
"\n",
|
||||
"sk_model = Ridge().fit(dataset_x, dataset_y)\n",
|
||||
"\n",
|
||||
"joblib.dump(sk_model, \"sklearn_regression_model.pkl\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Register Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Here we are registering the serialized file `sklearn_regression_model.pkl` in the current directory as a model with the name `sklearn_regression_model` in the workspace.\n",
|
||||
"\n",
|
||||
"You can add tags and descriptions to your models. Using tags, you can track useful information such as the name and version of the machine learning library used to train the model, framework, category, target customer etc. Note that tags must be alphanumeric."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"register model from file"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import Model\n",
|
||||
"\n",
|
||||
"model = Model.register(model_path=\"sklearn_regression_model.pkl\",\n",
|
||||
" model_name=\"sklearn_regression_model\",\n",
|
||||
" tags={'area': \"diabetes\", 'type': \"regression\"},\n",
|
||||
" description=\"Ridge regression model to predict diabetes\",\n",
|
||||
" workspace=ws)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create Environment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import sklearn\n",
|
||||
"\n",
|
||||
"from azureml.core.environment import Environment\n",
|
||||
"\n",
|
||||
"environment = Environment(\"LocalDeploy\")\n",
|
||||
"environment.python.conda_dependencies.add_pip_package(\"inference-schema[numpy-support]\")\n",
|
||||
"environment.python.conda_dependencies.add_pip_package(\"joblib\")\n",
|
||||
"environment.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))"
|
||||
]
|
||||
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Provide the Scoring Script\n",
"\n",
"This Python script handles the model execution inside the service container. The `init()` method loads the model file, and `run(data)` is called for every input to the service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"import os\n",
"\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
"\n",
"def init():\n",
"    global model\n",
"    # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
"    # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
"    # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
"    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
"    # Deserialize the model file back into a sklearn model.\n",
"    model = joblib.load(model_path)\n",
"\n",
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
"output_sample = np.array([3726.995])\n",
"\n",
"@input_schema('data', NumpyParameterType(input_sample))\n",
"@output_schema(NumpyParameterType(output_sample))\n",
"def run(data):\n",
"    try:\n",
"        result = model.predict(data)\n",
"        # You can return any JSON-serializable object.\n",
"        return result.tolist()\n",
"    except Exception as e:\n",
"        error = str(e)\n",
"        return error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Inference Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inference_config = InferenceConfig(entry_script=\"score.py\",\n",
"                                   environment=environment)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy Model as a Local Docker Web Service\n",
"\n",
"*Make sure you have Docker installed and running.*\n",
"\n",
"Note that the service creation can take a few minutes.\n",
"\n",
"NOTE:\n",
"\n",
"The Docker image runs as a Linux container. If you are running Docker for Windows, you need to ensure the Linux Engine is running:\n",
"\n",
"    # PowerShell command to switch to Linux engine\n",
"    & 'C:\\Program Files\\Docker\\Docker\\DockerCli.exe' -SwitchLinuxEngine"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"sample-localwebservice-deploy"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import LocalWebservice\n",
"\n",
"# This is optional; if not provided, Docker will choose a random unused port.\n",
"deployment_config = LocalWebservice.deploy_configuration(port=6789)\n",
"\n",
"local_service = Model.deploy(ws, \"test\", [model], inference_config, deployment_config)\n",
"\n",
"local_service.wait_for_deployment()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print('Local service port: {}'.format(local_service.port))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Check Status and Get Container Logs\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(local_service.get_logs())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test Web Service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the web service with some input data to get a prediction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"sample_input = json.dumps({\n",
"    'data': dataset_x[0:2].tolist()\n",
"})\n",
"\n",
"local_service.run(sample_input)"
]
},
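`local_service.run()` is a convenience wrapper around an HTTP POST to the container. To exercise the service exactly as an external client would, you can send the same JSON payload to its scoring endpoint directly; a minimal sketch (not part of the original notebook), assuming the `local_service` and `sample_input` objects defined above:

```python
# Sketch: call the local web service over raw HTTP instead of local_service.run().
# local_service.scoring_uri points at the container, e.g. http://localhost:6789/score.
import requests

headers = {'Content-Type': 'application/json'}
response = requests.post(local_service.scoring_uri, data=sample_input, headers=headers)
print(response.status_code, response.json())
```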
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reload Service\n",
"\n",
"You can update your score.py file and then call `reload()` to quickly restart the service. This will only reload your execution script and dependency files; it will not rebuild the underlying Docker image. As a result, `reload()` is fast, but if you do need to rebuild the image -- to add a new Conda or pip package, for instance -- you will have to call `update()` instead (see below)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"import os\n",
"\n",
"from inference_schema.schema_decorators import input_schema, output_schema\n",
"from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType\n",
"\n",
"def init():\n",
"    global model\n",
"    # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
"    # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
"    # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
"    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
"    # Deserialize the model file back into a sklearn model.\n",
"    model = joblib.load(model_path)\n",
"\n",
"input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])\n",
"output_sample = np.array([3726.995])\n",
"\n",
"@input_schema('data', NumpyParameterType(input_sample))\n",
"@output_schema(NumpyParameterType(output_sample))\n",
"def run(data):\n",
"    try:\n",
"        result = model.predict(data)\n",
"        # You can return any JSON-serializable object.\n",
"        return 'Hello from the updated score.py: ' + str(result.tolist())\n",
"    except Exception as e:\n",
"        error = str(e)\n",
"        return error"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_service.reload()\n",
"print(\"--------------------------------------------------------------\")\n",
"\n",
"# After calling reload(), run() will return the updated message.\n",
"local_service.run(sample_input)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Update Service\n",
"\n",
"If you want to change your model(s), Conda dependencies, or deployment configuration, call `update()` to rebuild the Docker image.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_service.update(models=[model],\n",
"                     inference_config=inference_config,\n",
"                     deployment_config=deployment_config)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy Model to AKS Cluster Based on the LocalWebservice's Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This is a one-time setup for the AKS cluster. You can reuse this cluster for multiple deployments after it has been created.\n",
"# If you delete the cluster or the resource group that contains it, you will have to recreate it.\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your AKS cluster\n",
"aks_name = 'my-aks-9'\n",
"\n",
"# Verify the cluster does not exist already\n",
"try:\n",
"    aks_target = ComputeTarget(workspace=ws, name=aks_name)\n",
"    print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
"    # Use the default configuration (can also provide parameters to customize)\n",
"    prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"    # Create the cluster\n",
"    aks_target = ComputeTarget.create(workspace=ws,\n",
"                                      name=aks_name,\n",
"                                      provisioning_configuration=prov_config)\n",
"\n",
"if aks_target.get_status() != \"Succeeded\":\n",
"    aks_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AksWebservice\n",
"# Set the web service configuration (using default here)\n",
"aks_config = AksWebservice.deploy_configuration()\n",
"\n",
"# # Enable token auth and disable (key) auth on the webservice\n",
"# aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True, auth_enabled=False)\n"
]
},
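By default an AKS web service uses key-based auth; the commented lines above switch it to token auth instead. Either way, a direct HTTP client must attach credentials. A hedged sketch of both modes (not part of the original notebook), assuming a deployed `AksWebservice` object named `aks_service`:

```python
# Sketch: authenticate a raw HTTP call to an AKS web service.
import requests

headers = {'Content-Type': 'application/json'}

# Key auth (the default): either the primary or the secondary key works.
primary_key, secondary_key = aks_service.get_keys()
headers['Authorization'] = 'Bearer ' + primary_key

# Token auth (if token_auth_enabled=True was used): tokens expire, so fetch a new
# one after the returned refresh_by time.
# token, refresh_by = aks_service.get_token()
# headers['Authorization'] = 'Bearer ' + token

response = requests.post(aks_service.scoring_uri, data=sample_input, headers=headers)
print(response.json())
```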
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name = 'aks-service-1'\n",
"\n",
"aks_service = local_service.deploy_to_cloud(name=aks_service_name,\n",
"                                            deployment_config=aks_config,\n",
"                                            deployment_target=aks_target)\n",
"\n",
"aks_service.wait_for_deployment(show_output=True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test the AKS service\n",
"\n",
"sample_input = json.dumps({\n",
"    'data': dataset_x[0:2].tolist()\n",
"})\n",
"\n",
"aks_service.run(sample_input)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Delete the service if not needed.\n",
"aks_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Delete Service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "keriehm"
}
],
"category": "tutorial",
"compute": [
"Local"
],
"datasets": [
"None"
],
"deployment": [
"Local"
],
"exclude_from_index": false,
"framework": [
"None"
],
"friendly_name": "Register a model and deploy locally",
"index_order": 1,
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
},
"star_tag": [],
"tags": [
"None"
],
"task": "Deployment"
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -1,498 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Enabling App Insights for Services in Production\n",
"With this notebook, you can learn how to enable App Insights for standard service monitoring; we also provide examples of custom logging within a model's scoring file.\n",
"\n",
"\n",
"## What does Application Insights monitor?\n",
"It monitors request rates, response times, failure rates, etc. For more information, visit the [App Insights docs](https://docs.microsoft.com/en-us/azure/application-insights/app-insights-overview).\n",
"\n",
"\n",
"## What is different compared to the standard production deployment process?\n",
"If you want to enable generic App Insights for a service, run:\n",
"```python\n",
"aks_service = Webservice(ws, \"aks-w-dc2\")\n",
"aks_service.update(enable_app_insights=True)\n",
"```\n",
"where \"aks-w-dc2\" is your service name. You can also do this from the Azure Portal: under your Workspace, go to Deployments, select the deployment, choose Edit, then Advanced Settings, and select \"Enable AppInsights diagnostics\".\n",
"\n",
"If you want to log custom traces, you will follow the standard deployment process for AKS and you will:\n",
"1. Update the scoring file.\n",
"2. Update the AKS configuration.\n",
"3. Deploy the model with this new configuration."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Import your dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"import json\n",
"\n",
"from azureml.core import Workspace\n",
"from azureml.core.compute import AksCompute, ComputeTarget\n",
"from azureml.core.webservice import AksWebservice\n",
"\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Set up your configuration and create a workspace\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Register Model\n",
"Register an existing trained model and add a description and tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Model\n",
"\n",
"model = Model.register(model_path=\"sklearn_regression_model.pkl\",  # This points to a local file.\n",
"                       model_name=\"sklearn_regression_model.pkl\",  # This is the name the model is registered as.\n",
"                       tags={'area': \"diabetes\", 'type': \"regression\"},\n",
"                       description=\"Ridge regression model to predict diabetes\",\n",
"                       workspace=ws)\n",
"\n",
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. *Update your scoring file with custom print statements*\n",
"Here is an example:\n",
"### a. In your init function, add:\n",
"```python\n",
"print(\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n",
"```\n",
"\n",
"### b. In your run function, add:\n",
"```python\n",
"print(\"Prediction created\" + time.strftime(\"%H:%M:%S\"))\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import os\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"import joblib\n",
"from sklearn.linear_model import Ridge\n",
"import time\n",
"\n",
"def init():\n",
"    global model\n",
"    # Print statement for App Insights custom traces:\n",
"    print(\"model initialized\" + time.strftime(\"%H:%M:%S\"))\n",
"\n",
"    # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
"    # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
"    # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
"    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
"\n",
"    # deserialize the model file back into a sklearn model\n",
"    model = joblib.load(model_path)\n",
"\n",
"\n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
"    try:\n",
"        data = json.loads(raw_data)['data']\n",
"        data = numpy.array(data)\n",
"        result = model.predict(data)\n",
"        print(\"Prediction created\" + time.strftime(\"%H:%M:%S\"))\n",
"        # you can return any datatype as long as it is JSON-serializable\n",
"        return result.tolist()\n",
"    except Exception as e:\n",
"        error = str(e)\n",
"        print(error + time.strftime(\"%H:%M:%S\"))\n",
"        return error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. *Create myenv.yml file*\n",
"Please note that you must include azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"myenv = CondaDependencies.create(conda_packages=['numpy==1.19.5', 'scikit-learn==0.22.1'],\n",
"                                 pip_packages=['azureml-defaults'])\n",
"\n",
"with open(\"myenv.yml\", \"w\") as f:\n",
"    f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Create Inference Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.environment import Environment\n",
"from azureml.core.model import InferenceConfig\n",
"\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy to ACI (Optional)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aci_deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,\n",
"                                                           memory_gb=1,\n",
"                                                           tags={'area': \"diabetes\", 'type': \"regression\"},\n",
"                                                           description=\"Predict diabetes using regression model\",\n",
"                                                           enable_app_insights=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aci_service_name = \"aci-service-appinsights\"\n",
"\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aci_deployment_config, overwrite=True)\n",
"aci_service.wait_for_deployment(show_output=True)\n",
"\n",
"print(aci_service.state)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if aci_service.state == \"Healthy\":\n",
"    test_sample = json.dumps({\n",
"        \"data\": [\n",
"            [1,28,13,45,54,6,57,8,8,10],\n",
"            [101,9,8,37,6,45,4,3,2,41]\n",
"        ]\n",
"    })\n",
"\n",
"    prediction = aci_service.run(test_sample)\n",
"\n",
"    print(prediction)\n",
"else:\n",
"    raise ValueError(\"Service deployment isn't healthy, can't call the service. Error: \", aci_service.error)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Deploy to AKS service"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create AKS compute if you haven't done so\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AksCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"aks_name = \"my-aks-insights\"\n",
"\n",
"creating_compute = False\n",
"try:\n",
"    aks_target = ComputeTarget(ws, aks_name)\n",
"    print(\"Using existing AKS compute target {}.\".format(aks_name))\n",
"except ComputeTargetException:\n",
"    print(\"Creating a new AKS compute target {}.\".format(aks_name))\n",
"\n",
"    # Use the default configuration (can also provide parameters to customize).\n",
"    prov_config = AksCompute.provisioning_configuration()\n",
"    aks_target = ComputeTarget.create(workspace=ws,\n",
"                                      name=aks_name,\n",
"                                      provisioning_configuration=prov_config)\n",
"    creating_compute = True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"if creating_compute and aks_target.provisioning_state != \"Succeeded\":\n",
"    aks_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you already have a cluster, you can attach it to your workspace instead:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"%%time\n",
"resource_id = '/subscriptions/<subscriptionid>/resourcegroups/<resourcegroupname>/providers/Microsoft.ContainerService/managedClusters/<aksservername>'\n",
"create_name = 'myaks4'\n",
"attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
"aks_target = ComputeTarget.attach(workspace=ws,\n",
"                                  name=create_name,\n",
"                                  attach_configuration=attach_config)\n",
"# Wait for the operation to complete\n",
"aks_target.wait_for_provisioning(True)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### a. *Activate App Insights by updating the AKS Webservice configuration*\n",
"In order to enable App Insights in your service, you will need to update your AKS deployment configuration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the web service configuration.\n",
"aks_deployment_config = AksWebservice.deploy_configuration(enable_app_insights=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### b. Deploy your service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if aks_target.provisioning_state == \"Succeeded\":\n",
"    aks_service_name = \"aks-service-appinsights\"\n",
"    aks_service = Model.deploy(ws,\n",
"                               aks_service_name,\n",
"                               [model],\n",
"                               inference_config,\n",
"                               aks_deployment_config,\n",
"                               deployment_target=aks_target,\n",
"                               overwrite=True)\n",
"    aks_service.wait_for_deployment(show_output=True)\n",
"    print(aks_service.state)\n",
"else:\n",
"    raise ValueError(\"AKS cluster provisioning failed. Error: \", aks_target.provisioning_errors)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 8. Test your service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"if aks_service.state == \"Healthy\":\n",
"    test_sample = json.dumps({\n",
"        \"data\": [\n",
"            [1,28,13,45,54,6,57,8,8,10],\n",
"            [101,9,8,37,6,45,4,3,2,41]\n",
"        ]\n",
"    })\n",
"\n",
"    prediction = aks_service.run(input_data=test_sample)\n",
"    print(prediction)\n",
"else:\n",
"    raise ValueError(\"Service deployment isn't healthy, can't call the service. Error: \", aks_service.error)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 9. See your service telemetry in App Insights\n",
"1. Go to the [Azure Portal](https://portal.azure.com/)\n",
"2. Go to All resources, select the subscription/resource group where you created your Workspace, and select the Application Insights resource type\n",
"3. Click on the App Insights resource. You'll see a high-level dashboard with information on requests, server response time, and availability.\n",
"4. Click on the top banner \"Analytics\"\n",
"5. In the \"Schema\" section, select \"traces\" and run your query.\n",
"6. Voila! All your custom traces should be there."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Disable App Insights"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.update(enable_app_insights=False)\n",
"aks_service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service.delete()\n",
"aci_service.delete()\n",
"model.delete()\n",
"if creating_compute:\n",
"    aks_target.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "gopalv"
}
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -1,4 +0,0 @@
name: enable-app-insights-in-production-service
dependencies:
- pip:
  - azureml-sdk
Binary file not shown.
@@ -1,2 +0,0 @@
RUN apt-get update
RUN apt-get install -y libgomp1
@@ -1,39 +0,0 @@
# ONNX on Azure Machine Learning

These tutorials show how to create and deploy Open Neural Network eXchange ([ONNX](http://onnx.ai)) models in Azure Machine Learning environments using [ONNX Runtime](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx) for inference. Once deployed as a web service, you can ping the model with your own set of images to be analyzed!

## Tutorials

0. If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, [Configure your Azure Machine Learning Workspace](../../../configuration.ipynb)

#### Obtain pretrained models from the [ONNX Model Zoo](https://github.com/onnx/models) and deploy with ONNX Runtime
1. [MNIST - Handwritten Digit Classification with ONNX Runtime](onnx-inference-mnist-deploy.ipynb)
2. [Emotion FER+ - Facial Expression Recognition with ONNX Runtime](onnx-inference-facial-expression-recognition-deploy.ipynb)

#### Train model on Azure ML, convert to ONNX, and deploy with ONNX Runtime
3. [MNIST - Train using PyTorch and deploy with ONNX Runtime](onnx-train-pytorch-aml-deploy-mnist.ipynb)

#### Demo Notebooks from Microsoft Ignite 2018
Note that the following notebooks do not have evaluation sections for the models since they were deployed as part of a live demo. You can find the respective pre-processing and post-processing code linked from the ONNX Model Zoo Github pages ([ResNet](https://github.com/onnx/models/tree/master/models/image_classification/resnet), [TinyYoloV2](https://github.com/onnx/models/tree/master/tiny_yolov2)), or experiment with the ONNX models by [running them in the browser](https://microsoft.github.io/onnxjs-demo/#/).

4. [ResNet50 - Image Recognition with ONNX Runtime](onnx-modelzoo-aml-deploy-resnet50.ipynb)
5. [TinyYoloV2 - Convert from CoreML and deploy with ONNX Runtime](onnx-convert-aml-deploy-tinyyolo.ipynb)

## Documentation
- [ONNX Runtime Python API Documentation](http://aka.ms/onnxruntime-python)
- [Azure Machine Learning API Documentation](http://aka.ms/aml-docs)

## Related Articles
- [Building and Deploying ONNX Runtime Models](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx)
- [Azure AI – Making AI Real for Business](https://aka.ms/aml-blog-overview)
- [What’s new in Azure Machine Learning](https://aka.ms/aml-blog-whats-new)

## License
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

## Acknowledgements
These tutorials were developed by Vinitra Swamy and Prasanth Pulavarthi of the Microsoft AI Frameworks team and adapted for presentation at Microsoft Ignite 2018.


Binary file not shown.
@@ -1,135 +0,0 @@
# This is a modified version of https://github.com/pytorch/examples/blob/master/mnist/main.py which is
# licensed under BSD 3-Clause (https://github.com/pytorch/examples/blob/master/LICENSE)

from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import os


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)


def train(args, model, device, train_loader, optimizer, epoch, output_dir):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))


def test(args, model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            # reduction='sum' replaces the deprecated size_average=False, reduce=True
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.max(1, keepdim=True)[1]  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


def main():
    # Training settings
    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
    parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                        help='input batch size for training (default: 64)')
    parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                        help='input batch size for testing (default: 1000)')
    parser.add_argument('--epochs', type=int, default=5, metavar='N',
                        help='number of epochs to train (default: 5)')
    parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                        help='learning rate (default: 0.01)')
    parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                        help='SGD momentum (default: 0.5)')
    parser.add_argument('--no-cuda', action='store_true', default=False,
                        help='disables CUDA training')
    parser.add_argument('--seed', type=int, default=1, metavar='S',
                        help='random seed (default: 1)')
    parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                        help='how many batches to wait before logging training status')
    parser.add_argument('--output-dir', type=str, default='outputs')
    args = parser.parse_args()
    use_cuda = not args.no_cuda and torch.cuda.is_available()

    torch.manual_seed(args.seed)

    device = torch.device("cuda" if use_cuda else "cpu")

    output_dir = args.output_dir
    os.makedirs(output_dir, exist_ok=True)

    kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
    # Use Azure Open Datasets for MNIST dataset
    datasets.MNIST.resources = [
        ("https://azureopendatastorage.azurefd.net/mnist/train-images-idx3-ubyte.gz",
         "f68b3c2dcbeaaa9fbdd348bbdeb94873"),
        ("https://azureopendatastorage.azurefd.net/mnist/train-labels-idx1-ubyte.gz",
         "d53e105ee54ea40749a09fcbcd1e9432"),
        ("https://azureopendatastorage.azurefd.net/mnist/t10k-images-idx3-ubyte.gz",
         "9fb629c4189551a2d022fa330f9573f3"),
        ("https://azureopendatastorage.azurefd.net/mnist/t10k-labels-idx1-ubyte.gz",
         "ec29112dd5afa0611ce80d1b7f02629c")
    ]
    train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('data', train=True, download=True,
                       transform=transforms.Compose([transforms.ToTensor(),
                                                     transforms.Normalize((0.1307,), (0.3081,))])
                       ),
        batch_size=args.batch_size, shuffle=True, **kwargs)
    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('data', train=False,
                       transform=transforms.Compose([transforms.ToTensor(),
                                                     transforms.Normalize((0.1307,), (0.3081,))])
                       ),
        batch_size=args.test_batch_size, shuffle=True, **kwargs)

    model = Net().to(device)
    optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

    for epoch in range(1, args.epochs + 1):
        train(args, model, device, train_loader, optimizer, epoch, output_dir)
        test(args, model, device, test_loader)

    # save model
    dummy_input = torch.randn(1, 1, 28, 28, device=device)
    model_path = os.path.join(output_dir, 'mnist.onnx')
    torch.onnx.export(model, dummy_input, model_path)


if __name__ == '__main__':
    main()
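# The exported ONNX model can be sanity-checked locally with ONNX Runtime. A minimal
# sketch (not part of the original script), assuming the default --output-dir of
# 'outputs' and that the onnxruntime package is installed:
#
#     import numpy as np
#     import onnxruntime as ort
#     sess = ort.InferenceSession('outputs/mnist.onnx')
#     input_name = sess.get_inputs()[0].name
#     logits = sess.run(None, {input_name: np.random.rand(1, 1, 28, 28).astype('float32')})[0]
#     print(logits.shape)  # expected: (1, 10) log-probabilities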
@@ -1,434 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# YOLO Real-time Object Detection using ONNX on AzureML\n",
"\n",
"This example shows how to convert the TinyYOLO model from CoreML to ONNX and operationalize it as a web service using Azure Machine Learning services and the ONNX Runtime.\n",
"\n",
"## What is ONNX\n",
"ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by enabling data scientists and developers to use the tools of their choice without worrying about lock-in, and with the flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).\n",
"\n",
"## YOLO Details\n",
"You Only Look Once (YOLO) is a state-of-the-art, real-time object detection system. For more information about YOLO, please visit the [YOLO website](https://pjreddie.com/darknet/yolo/)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"\n",
"To make the best use of your time, make sure you have done the following:\n",
"\n",
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
"* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook to:\n",
"    * install the AML SDK\n",
"    * create a workspace and its configuration file (config.json)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Install necessary packages\n",
"\n",
"You'll need to run the following commands to use this tutorial:\n",
"\n",
"```sh\n",
"pip install onnxmltools\n",
"pip install coremltools  # use this on Linux and Mac\n",
"pip install git+https://github.com/apple/coremltools  # use this on Windows\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Convert model to ONNX\n",
"\n",
"First we download the CoreML model. We use the CoreML model from [Matthijs Hollemans's tutorial](https://github.com/hollance/YOLO-CoreML-MPSNNGraph). This may take a few minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import urllib.request\n",
"\n",
"coreml_model_url = \"https://github.com/hollance/YOLO-CoreML-MPSNNGraph/raw/master/TinyYOLO-CoreML/TinyYOLO-CoreML/TinyYOLO.mlmodel\"\n",
"urllib.request.urlretrieve(coreml_model_url, filename=\"TinyYOLO.mlmodel\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we use ONNXMLTools to convert the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import onnxmltools\n",
"import coremltools\n",
"\n",
"# Load a CoreML model\n",
"coreml_model = coremltools.utils.load_spec('TinyYOLO.mlmodel')\n",
"\n",
"# Convert from CoreML into ONNX\n",
"onnx_model = onnxmltools.convert_coreml(coreml_model, 'TinyYOLOv2')\n",
"\n",
"# Fix the preprocessor bias in the ImageScaler\n",
"for init in onnx_model.graph.initializer:\n",
"    if init.name == 'scalerPreprocessor_bias':\n",
"        init.dims[1] = 1\n",
"\n",
"# Save ONNX model\n",
"onnxmltools.utils.save_model(onnx_model, 'tinyyolov2.onnx')\n",
"\n",
"import os\n",
"print(os.path.getsize('tinyyolov2.onnx'))"
]
},
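Before registering the converted model, it is worth validating that the exported graph is well-formed. A short sketch (not part of the original notebook) using the `onnx` package, which `onnxmltools` builds on:

```python
# Sketch: sanity-check the converted ONNX graph before registering it.
import onnx

loaded_model = onnx.load('tinyyolov2.onnx')
onnx.checker.check_model(loaded_model)    # raises if the graph is malformed
print(loaded_model.graph.input[0].name)   # inspect the expected input tensor name
```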
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploying as a web service with Azure ML\n",
"\n",
"### Load Azure ML workspace\n",
"\n",
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.location, ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Registering your model with Azure ML\n",
"\n",
"Now we upload the model and register it in the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"\n",
"model = Model.register(model_path = \"tinyyolov2.onnx\",\n",
"                       model_name = \"tinyyolov2\",\n",
"                       tags = {\"onnx\": \"demo\"},\n",
"                       description = \"TinyYOLO\",\n",
"                       workspace = ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Displaying your registered models\n",
"\n",
"You can optionally list out all the models that you have registered in this workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"models = ws.models\n",
"for name, m in models.items():\n",
"    print(\"Name:\", name, \"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Write scoring file\n",
"\n",
"We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started, so we load the model using the ONNX Runtime into a global session object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import json\n",
"import time\n",
"import sys\n",
"import os\n",
"from azureml.core.model import Model\n",
"import numpy as np    # we're going to use numpy to process input and output data\n",
"import onnxruntime    # to inference ONNX models, we use the ONNX Runtime\n",
"\n",
"def init():\n",
"    global session\n",
"    model_path = Model.get_model_path(model_name = 'tinyyolov2')\n",
"    session = onnxruntime.InferenceSession(model_path)\n",
"\n",
"def preprocess(input_data_json):\n",
"    # convert the JSON data into the tensor input\n",
"    return np.array(json.loads(input_data_json)['data']).astype('float32')\n",
"\n",
"def postprocess(result):\n",
"    return np.array(result).tolist()\n",
"\n",
"def run(input_data_json):\n",
"    try:\n",
"        start = time.time()   # start timer\n",
"        input_data = preprocess(input_data_json)\n",
"        input_name = session.get_inputs()[0].name  # get the id of the first input of the model\n",
"        result = session.run([], {input_name: input_data})\n",
"        end = time.time()     # stop timer\n",
"        return {\"result\": postprocess(result),\n",
"                \"time\": end - start}\n",
"    except Exception as e:\n",
"        result = str(e)\n",
"        return {\"error\": result}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setting up inference configuration\n",
"First we create a YAML file that specifies which dependencies we would like to see in our container. Please note that you must include azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"myenv = CondaDependencies.create(pip_packages=[\"numpy\", \"onnxruntime==1.15.1\", \"azureml-core\", \"azureml-defaults\"])\n",
"\n",
"with open(\"myenv.yml\", \"w\") as f:\n",
"    f.write(myenv.serialize_to_string())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we create the inference configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.environment import Environment\n",
"\n",
"\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,\n",
"                                               memory_gb = 1,\n",
"                                               tags = {'demo': 'onnx'},\n",
"                                               description = 'web service for TinyYOLO ONNX model')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell will take a few minutes to run as the model gets packaged up and deployed to ACI."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aci_service_name = 'my-aci-service-tiny-yolo'\n",
"print(\"Service\", aci_service_name)\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
"aci_service.wait_for_deployment(True)\n",
"print(aci_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if aci_service.state != 'Healthy':\n",
"    # run this command for debugging.\n",
"    print(aci_service.get_logs())\n",
"    aci_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Success!\n",
"\n",
"If you've made it this far, you've deployed a working web service that does object detection using an ONNX model. You can get the URL for the webservice with the code below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(aci_service.scoring_uri)"
]
},
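The service expects the JSON shape produced by `preprocess()` in score.py: a `data` field carrying a float32 image tensor. A client sketch (not part of the original notebook) that prepares an image and posts it to the endpoint; the 1x3x416x416 NCHW layout is the usual TinyYOLOv2 input, but verify it against your converted model before relying on it:

```python
# Sketch: score a local image (here an assumed 'test.jpg') against the deployed service.
import json

import cv2                 # from opencv-python
import numpy as np
import requests

img = cv2.resize(cv2.imread('test.jpg'), (416, 416))    # HxWx3 uint8
tensor = img.transpose(2, 0, 1)[np.newaxis, ...]        # 1x3x416x416, NCHW
payload = json.dumps({'data': tensor.astype('float32').tolist()})

response = requests.post(aci_service.scoring_uri,
                         data=payload,
                         headers={'Content-Type': 'application/json'})
print(response.json()['time'])   # scoring latency reported by score.py
```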
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"When you are eventually done using the web service, remember to delete it."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"aci_service.delete()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "viswamy"
|
||||
}
|
||||
],
|
||||
"category": "deployment",
|
||||
"compute": [
|
||||
"local"
|
||||
],
|
||||
"datasets": [
|
||||
"PASCAL VOC"
|
||||
],
|
||||
"deployment": [
|
||||
"Azure Container Instance"
|
||||
],
|
||||
"exclude_from_index": false,
|
||||
"framework": [
|
||||
"ONNX"
|
||||
],
|
||||
"friendly_name": "Convert and deploy TinyYolo with ONNX Runtime",
|
||||
"index_order": 5,
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.5"
|
||||
},
|
||||
"star_tag": [
|
||||
"featured"
|
||||
],
|
||||
"tags": [
|
||||
"ONNX Converter"
|
||||
],
|
||||
"task": "Object Detection"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,8 +0,0 @@
|
||||
name: onnx-convert-aml-deploy-tinyyolo
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- numpy
|
||||
- git+https://github.com/apple/coremltools@v2.1
|
||||
- onnx<1.7.0
|
||||
- onnxmltools
|
||||
@@ -1,801 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Facial Expression Recognition (FER+) using ONNX Runtime on Azure ML\n",
|
||||
"\n",
|
||||
"This example shows how to deploy an image classification neural network using the Facial Expression Recognition ([FER](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data)) dataset and Open Neural Network eXchange format ([ONNX](http://aka.ms/onnxdocarticle)) on the Azure Machine Learning platform. This tutorial will show you how to deploy a FER+ model from the [ONNX model zoo](https://github.com/onnx/models), use it to make predictions using ONNX Runtime Inference, and deploy it as a web service in Azure.\n",
|
||||
"\n",
|
||||
"Throughout this tutorial, we will be referring to ONNX, a neural network exchange format used to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools (CNTK, PyTorch, Caffe, MXNet, TensorFlow) and choose the combination that is best for them. ONNX is developed and supported by a community of partners including Microsoft AI, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai) and [open source files](https://github.com/onnx).\n",
|
||||
"\n",
|
||||
"[ONNX Runtime](https://aka.ms/onnxruntime-python) is the runtime engine that enables evaluation of trained machine learning (Traditional ML and Deep Learning) models with high performance and low resource utilization. We use the CPU version of ONNX Runtime in this tutorial, but will soon be releasing an additional tutorial for deploying this model using ONNX Runtime GPU.\n",
|
||||
"\n",
|
||||
"#### Tutorial Objectives:\n",
|
||||
"\n",
|
||||
"1. Describe the FER+ dataset and pretrained Convolutional Neural Net ONNX model for Emotion Recognition, stored in the ONNX model zoo.\n",
|
||||
"2. Deploy and run the pretrained FER+ ONNX model on an Azure Machine Learning instance\n",
|
||||
"3. Predict labels for test set data points in the cloud using ONNX Runtime and Azure ML"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"### 1. Install Azure ML SDK and create a new workspace\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
|
||||
"\n",
|
||||
"### 2. Install additional packages needed for this Notebook\n",
|
||||
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n",
|
||||
"\n",
|
||||
"```sh\n",
|
||||
"(myenv) $ pip install matplotlib onnx opencv-python\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"**Debugging tip**: Make sure that to activate your virtual environment (myenv) before you re-launch this notebook using the `jupyter notebook` comand. Choose the respective Python kernel for your new virtual environment using the `Kernel > Change Kernel` menu above. If you have completed the steps correctly, the upper right corner of your screen should state `Python [conda env:myenv]` instead of `Python [default]`.\n",
|
||||
"\n",
|
||||
"### 3. Download sample data and pre-trained ONNX model from ONNX Model Zoo.\n",
|
||||
"\n",
|
||||
"In the following lines of code, we download [the trained ONNX Emotion FER+ model and corresponding test data](https://github.com/onnx/models/tree/master/vision/body_analysis/emotion_ferplus) and place them in the same folder as this tutorial notebook. For more information about the FER+ dataset, please visit Microsoft Researcher Emad Barsoum's [FER+ source data repository](https://github.com/ebarsoum/FERPlus)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# urllib is a built-in Python library to download files from URLs\n",
|
||||
"\n",
|
||||
"# Objective: retrieve the latest version of the ONNX Emotion FER+ model files from the\n",
|
||||
"# ONNX Model Zoo and save it in the same folder as this tutorial\n",
|
||||
"\n",
|
||||
"import urllib.request\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"onnx_model_url = \"https://github.com/onnx/models/blob/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.tar.gz?raw=true\"\n",
|
||||
"\n",
|
||||
"urllib.request.urlretrieve(onnx_model_url, filename=\"emotion-ferplus-7.tar.gz\")\n",
|
||||
"os.mkdir(\"emotion_ferplus\")\n",
|
||||
"\n",
|
||||
"# the ! magic command tells our jupyter notebook kernel to run the following line of \n",
|
||||
"# code from the command line instead of the notebook kernel\n",
|
||||
"\n",
|
||||
"# We use tar and xvcf to unzip the files we just retrieved from the ONNX model zoo\n",
"\n",
|
||||
"!tar xvzf emotion-ferplus-7.tar.gz -C emotion_ferplus"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deploy a VM with your ONNX model in the Cloud\n",
|
||||
"\n",
|
||||
"### Load Azure ML workspace\n",
|
||||
"\n",
|
||||
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.location, ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Registering your model with Azure ML"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"model_dir = \"emotion_ferplus/model\" # replace this with the location of your model files\n",
|
||||
"\n",
|
||||
"# leave as is if it's in the same folder as this notebook"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import Model\n",
|
||||
"\n",
|
||||
"model = Model.register(model_path = model_dir + \"/\" + \"model.onnx\",\n",
|
||||
" model_name = \"onnx_emotion\",\n",
|
||||
" tags = {\"onnx\": \"demo\"},\n",
|
||||
" description = \"FER+ emotion recognition CNN from ONNX Model Zoo\",\n",
|
||||
" workspace = ws)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Optional: Displaying your registered models\n",
|
||||
"\n",
|
||||
"This step is not required, so feel free to skip it."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"models = ws.models\n",
|
||||
"for name, m in models.items():\n",
|
||||
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### ONNX FER+ Model Methodology\n",
|
||||
"\n",
|
||||
"The image classification model we are using is pre-trained using Microsoft's deep learning cognitive toolkit, [CNTK](https://github.com/Microsoft/CNTK), from the [ONNX model zoo](http://github.com/onnx/models). The model zoo has many other models that can be deployed on cloud providers like AzureML without any additional training. To ensure that our cloud deployed model works, we use testing data from the well-known FER+ data set, provided as part of the [trained Emotion Recognition model](https://github.com/onnx/models/tree/master/vision/body_analysis/emotion_ferplus) in the ONNX model zoo.\n",
|
||||
"\n",
|
||||
"The original Facial Emotion Recognition (FER) Dataset was released in 2013 by Pierre-Luc Carrier and Aaron Courville as part of a [Kaggle Competition](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data), but some of the labels are not entirely appropriate for the expression. In the FER+ Dataset, each photo was evaluated by at least 10 croud sourced reviewers, creating a more accurate basis for ground truth. \n",
"\n",
|
||||
"You can see the difference of label quality in the sample model input below. The FER labels are the first word below each image, and the FER+ labels are the second word below each image.\n",
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"***Input: Photos of cropped faces from FER+ Dataset***\n",
|
||||
"\n",
|
||||
"***Task: Classify each facial image into its appropriate emotions in the emotion table***\n",
|
||||
"\n",
|
||||
"``` emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, 'anger':4, 'disgust':5, 'fear':6, 'contempt':7} ```\n",
|
||||
"\n",
|
||||
"***Output: Emotion prediction for input image***\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Remember, once the application is deployed in Azure ML, you can use your own images as input for the model to classify."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# for images and plots in this notebook\n",
|
||||
"import matplotlib.pyplot as plt \n",
|
||||
"\n",
|
||||
"# display images inline\n",
|
||||
"%matplotlib inline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Model Description\n",
|
||||
"\n",
|
||||
"The FER+ model from the ONNX Model Zoo is summarized by the graphic below. You can see the entire workflow of our pre-trained model in the following image from Barsoum et. al's paper [\"Training Deep Networks for Facial Expression Recognition\n",
"with Crowd-Sourced Label Distribution\"](https://arxiv.org/pdf/1608.01041.pdf), with our (64 x 64) input images and our output probabilities for each of the labels."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Specify our Score and Environment Files"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file. You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating out model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n",
"\n",
|
||||
"### Write Score File\n",
|
||||
"\n",
|
||||
"A score file is what tells our Azure cloud service what to do. After initializing our model using azureml.core.model, we start an ONNX Runtime inference session to evaluate the data passed in on our function calls."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile score.py\n",
|
||||
"import json\n",
|
||||
"import numpy as np\n",
|
||||
"import onnxruntime\n",
|
||||
"import sys\n",
|
||||
"import os\n",
|
||||
"import time\n",
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global session, input_name, output_name\n",
" model = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.onnx')\n",
|
||||
" session = onnxruntime.InferenceSession(model, None)\n",
|
||||
" input_name = session.get_inputs()[0].name\n",
|
||||
" output_name = session.get_outputs()[0].name \n",
|
||||
" \n",
|
||||
"def run(input_data):\n",
|
||||
" '''Purpose: evaluate test input in Azure Cloud using onnxruntime.\n",
|
||||
" We will call the run function later from our Jupyter Notebook \n",
|
||||
" so our azure service can evaluate our model input in the cloud. '''\n",
|
||||
"\n",
|
||||
" try:\n",
|
||||
" # load in our data, convert to readable format\n",
|
||||
" data = np.array(json.loads(input_data)['data']).astype('float32')\n",
|
||||
" \n",
|
||||
" start = time.time()\n",
|
||||
" r = session.run([output_name], {input_name : data})\n",
|
||||
" end = time.time()\n",
|
||||
" \n",
|
||||
" result = emotion_map(postprocess(r[0]))\n",
|
||||
" \n",
|
||||
" result_dict = {\"result\": result,\n",
|
||||
" \"time_in_sec\": [end - start]}\n",
|
||||
" except Exception as e:\n",
|
||||
" result_dict = {\"error\": str(e)}\n",
|
||||
" \n",
|
||||
" return json.dumps(result_dict)\n",
|
||||
"\n",
|
||||
"def emotion_map(classes, N=1):\n",
|
||||
" \"\"\"Take the most probable labels (output of postprocess) and returns the \n",
|
||||
" top N emotional labels that fit the picture.\"\"\"\n",
|
||||
" \n",
|
||||
" emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, \n",
|
||||
" 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}\n",
|
||||
" \n",
|
||||
" emotion_keys = list(emotion_table.keys())\n",
|
||||
" emotions = []\n",
|
||||
" for i in range(N):\n",
|
||||
" emotions.append(emotion_keys[classes[i]])\n",
|
||||
" return emotions\n",
|
||||
"\n",
|
||||
"def softmax(x):\n",
|
||||
" \"\"\"Compute softmax values (probabilities from 0 to 1) for each possible label.\"\"\"\n",
|
||||
" x = x.reshape(-1)\n",
|
||||
" e_x = np.exp(x - np.max(x))\n",
|
||||
" return e_x / e_x.sum(axis=0)\n",
|
||||
"\n",
|
||||
"def postprocess(scores):\n",
|
||||
" \"\"\"This function takes the scores generated by the network and \n",
|
||||
" returns the class IDs in decreasing order of probability.\"\"\"\n",
|
||||
" prob = softmax(scores)\n",
|
||||
" prob = np.squeeze(prob)\n",
|
||||
" classes = np.argsort(prob)[::-1]\n",
|
||||
" return classes"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Write Environment File\n",
|
||||
"Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.conda_dependencies import CondaDependencies \n",
|
||||
"\n",
|
||||
"\n",
"myenv = CondaDependencies.create(pip_packages=[\"numpy\", \"onnxruntime==1.15.1\", \"azureml-core\", \"azureml-defaults\"])\n",
|
||||
"\n",
|
||||
"with open(\"myenv.yml\",\"w\") as f:\n",
|
||||
" f.write(myenv.serialize_to_string())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Setup inference configuration"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import InferenceConfig\n",
|
||||
"from azureml.core.environment import Environment\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
|
||||
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deploy the model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.webservice import AciWebservice\n",
|
||||
"\n",
|
||||
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
||||
" memory_gb = 1, \n",
|
||||
" tags = {'demo': 'onnx'}, \n",
|
||||
" description = 'ONNX for emotion recognition model')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The following cell will likely take a few minutes to run as well."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"aci_service_name = 'onnx-demo-emotion'\n",
|
||||
"print(\"Service\", aci_service_name)\n",
|
||||
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
|
||||
"aci_service.wait_for_deployment(True)\n",
|
||||
"print(aci_service.state)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if aci_service.state != 'Healthy':\n",
|
||||
" # run this command for debugging.\n",
|
||||
" print(aci_service.get_logs())\n",
|
||||
"\n",
|
||||
" # If your deployment fails, make sure to delete your aci_service before trying again!\n",
|
||||
" # aci_service.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Success!\n",
|
||||
"\n",
|
||||
"If you've made it this far, you've deployed a working VM with a facial emotion recognition model running in the cloud using Azure ML. Congratulations!\n",
|
||||
"\n",
|
||||
"Let's see how well our model deals with our test images."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Testing and Evaluation\n",
|
||||
"\n",
|
||||
"### Useful Helper Functions\n",
|
||||
"\n",
|
||||
"We preprocess and postprocess our data (see score.py file) using the helper functions specified in the [ONNX FER+ Model page in the Model Zoo repository](https://github.com/onnx/models/tree/master/vision/body_analysis/emotion_ferplus)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def emotion_map(classes, N=1):\n",
|
||||
" \"\"\"Take the most probable labels (output of postprocess) and returns the \n",
|
||||
" top N emotional labels that fit the picture.\"\"\"\n",
|
||||
" \n",
|
||||
" emotion_table = {'neutral':0, 'happiness':1, 'surprise':2, 'sadness':3, \n",
|
||||
" 'anger':4, 'disgust':5, 'fear':6, 'contempt':7}\n",
|
||||
" \n",
|
||||
" emotion_keys = list(emotion_table.keys())\n",
|
||||
" emotions = []\n",
|
||||
" for c in range(N):\n",
|
||||
" emotions.append(emotion_keys[classes[c]])\n",
|
||||
" return emotions\n",
|
||||
"\n",
|
||||
"def softmax(x):\n",
|
||||
" \"\"\"Compute softmax values (probabilities from 0 to 1) for each possible label.\"\"\"\n",
|
||||
" x = x.reshape(-1)\n",
|
||||
" e_x = np.exp(x - np.max(x))\n",
|
||||
" return e_x / e_x.sum(axis=0)\n",
|
||||
"\n",
|
||||
"def postprocess(scores):\n",
|
||||
" \"\"\"This function takes the scores generated by the network and \n",
|
||||
" returns the class IDs in decreasing order of probability.\"\"\"\n",
|
||||
" prob = softmax(scores)\n",
|
||||
" prob = np.squeeze(prob)\n",
|
||||
" classes = np.argsort(prob)[::-1]\n",
|
||||
" return classes"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Load Test Data\n",
|
||||
"\n",
|
||||
"These are already in your directory from your ONNX model download (from the model zoo).\n",
|
||||
"\n",
|
||||
"Notice that our Model Zoo files have a .pb extension. This is because they are [protobuf files (Protocol Buffers)](https://developers.google.com/protocol-buffers/docs/pythontutorial), so we need to read in our data through our ONNX TensorProto reader into a format we can work with, like numerical arrays."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# to manipulate our arrays\n",
|
||||
"import numpy as np \n",
|
||||
"\n",
|
||||
"# read in test data protobuf files included with the model\n",
|
||||
"import onnx\n",
|
||||
"from onnx import numpy_helper\n",
|
||||
"\n",
|
||||
"# to use parsers to read in our model/data\n",
|
||||
"import json\n",
|
||||
"\n",
|
||||
"test_inputs = []\n",
|
||||
"test_outputs = []\n",
|
||||
"\n",
|
||||
"# read in 1 testing images from .pb files\n",
"test_data_size = 1\n",
|
||||
"\n",
|
||||
"for num in np.arange(test_data_size):\n",
|
||||
" input_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(num), 'input_0.pb')\n",
|
||||
" output_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(num), 'output_0.pb')\n",
|
||||
" \n",
|
||||
" # convert protobuf tensors to np arrays using the TensorProto reader from ONNX\n",
|
||||
" tensor = onnx.TensorProto()\n",
|
||||
" with open(input_test_data, 'rb') as f:\n",
|
||||
" tensor.ParseFromString(f.read())\n",
|
||||
" \n",
|
||||
" input_data = numpy_helper.to_array(tensor)\n",
|
||||
" test_inputs.append(input_data)\n",
|
||||
" \n",
|
||||
" with open(output_test_data, 'rb') as f:\n",
|
||||
" tensor.ParseFromString(f.read())\n",
|
||||
" \n",
|
||||
" output_data = numpy_helper.to_array(tensor)\n",
|
||||
" output_processed = emotion_map(postprocess(output_data[0]))[0]\n",
|
||||
" test_outputs.append(output_processed)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"nbpresent": {
|
||||
"id": "c3f2f57c-7454-4d3e-b38d-b0946cf066ea"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Show some sample images\n",
|
||||
"We use `matplotlib` to plot 1 test images from the dataset."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"nbpresent": {
|
||||
"id": "396d478b-34aa-4afa-9898-cdce8222a516"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"plt.figure(figsize = (20, 20))\n",
|
||||
"for test_image in np.arange(test_data_size):\n",
|
||||
" test_inputs[test_image].reshape(1, 64, 64)\n",
" plt.subplot(1, 8, test_image+1)\n",
|
||||
" plt.axhline('')\n",
|
||||
" plt.axvline('')\n",
|
||||
" plt.text(x = 10, y = -10, s = test_outputs[test_image], fontsize = 18)\n",
|
||||
" plt.imshow(test_inputs[test_image].reshape(64, 64), cmap = plt.cm.gray)\n",
|
||||
"plt.show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Run evaluation / prediction"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"plt.figure(figsize = (16, 6))\n",
|
||||
"plt.subplot(1, 8, 1)\n",
|
||||
"\n",
|
||||
"plt.text(x = 0, y = -30, s = \"True Label: \", fontsize = 13, color = 'black')\n",
|
||||
"plt.text(x = 0, y = -20, s = \"Result: \", fontsize = 13, color = 'black')\n",
|
||||
"plt.text(x = 0, y = -10, s = \"Inference Time: \", fontsize = 13, color = 'black')\n",
|
||||
"plt.text(x = 3, y = 14, s = \"Model Input\", fontsize = 12, color = 'black')\n",
|
||||
"plt.text(x = 6, y = 18, s = \"(64 x 64)\", fontsize = 12, color = 'black')\n",
|
||||
"plt.imshow(np.ones((28,28)), cmap=plt.cm.Greys) \n",
|
||||
"\n",
|
||||
"\n",
|
||||
"for i in np.arange(test_data_size):\n",
|
||||
" \n",
|
||||
" input_data = json.dumps({'data': test_inputs[i].tolist()})\n",
|
||||
"\n",
|
||||
" # predict using the deployed model\n",
|
||||
" r = json.loads(aci_service.run(input_data))\n",
|
||||
" \n",
|
||||
" if \"error\" in r:\n",
|
||||
" print(r['error'])\n",
|
||||
" break\n",
|
||||
" \n",
|
||||
" result = r['result'][0]\n",
|
||||
" time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n",
|
||||
" \n",
|
||||
" ground_truth = test_outputs[i]\n",
|
||||
" \n",
|
||||
" # compare actual value vs. the predicted values:\n",
|
||||
" plt.subplot(1, 8, i+2)\n",
|
||||
" plt.axhline('')\n",
|
||||
" plt.axvline('')\n",
|
||||
"\n",
|
||||
" # use different color for misclassified sample\n",
|
||||
" font_color = 'red' if ground_truth != result else 'black'\n",
|
||||
" clr_map = plt.cm.Greys if ground_truth != result else plt.cm.gray\n",
|
||||
"\n",
|
||||
" # ground truth labels are in blue\n",
|
||||
" plt.text(x = 10, y = -70, s = ground_truth, fontsize = 18, color = 'blue')\n",
|
||||
" \n",
|
||||
" # predictions are in black if correct, red if incorrect\n",
|
||||
" plt.text(x = 10, y = -45, s = result, fontsize = 18, color = font_color)\n",
|
||||
" plt.text(x = 5, y = -22, s = str(time_ms) + ' ms', fontsize = 14, color = font_color)\n",
|
||||
"\n",
|
||||
" \n",
|
||||
" plt.imshow(test_inputs[i].reshape(64, 64), cmap = clr_map)\n",
|
||||
"\n",
|
||||
"plt.show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Try classifying your own images!"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Preprocessing functions take your image and format it so it can be passed\n",
|
||||
"# as input into our ONNX model\n",
|
||||
"\n",
|
||||
"import cv2\n",
|
||||
"\n",
|
||||
"def rgb2gray(rgb):\n",
|
||||
" \"\"\"Convert the input image into grayscale\"\"\"\n",
" return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n",
|
||||
"\n",
|
||||
"def resize_img(img_to_resize):\n",
|
||||
" \"\"\"Resize image to FER+ model input dimensions\"\"\"\n",
|
||||
" r_img = cv2.resize(img_to_resize, dsize=(64, 64), interpolation=cv2.INTER_AREA)\n",
|
||||
" r_img.resize((1, 1, 64, 64))\n",
|
||||
" return r_img\n",
|
||||
"\n",
|
||||
"def preprocess(img_to_preprocess):\n",
|
||||
" \"\"\"Resize input images and convert them to grayscale.\"\"\"\n",
|
||||
" if img_to_preprocess.shape == (64, 64):\n",
|
||||
" img_to_preprocess.resize((1, 1, 64, 64))\n",
|
||||
" return img_to_preprocess\n",
|
||||
" \n",
|
||||
" grayscale = rgb2gray(img_to_preprocess)\n",
|
||||
" processed_img = resize_img(grayscale)\n",
|
||||
" return processed_img"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Replace the following string with your own path/test image\n",
|
||||
"# Make sure your image is square and the dimensions are equal (i.e. 100 * 100 pixels or 28 * 28 pixels)\n",
"\n",
|
||||
"# Any PNG or JPG image file should work\n",
|
||||
"# Make sure to include the entire path with // instead of /\n",
"\n",
|
||||
"# e.g. your_test_image = \"C:/Users/vinitra.swamy/Pictures/face.png\"\n",
|
||||
"\n",
|
||||
"your_test_image = \"<path to file>\"\n",
|
||||
"\n",
|
||||
"import matplotlib.image as mpimg\n",
|
||||
"\n",
|
||||
"if your_test_image != \"<path to file>\":\n",
|
||||
" img = mpimg.imread(your_test_image)\n",
|
||||
" plt.subplot(1,3,1)\n",
|
||||
" plt.imshow(img, cmap = plt.cm.Greys)\n",
|
||||
" print(\"Old Dimensions: \", img.shape)\n",
|
||||
" img = preprocess(img)\n",
|
||||
" print(\"New Dimensions: \", img.shape)\n",
|
||||
"else:\n",
|
||||
" img = None"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if img is None:\n",
|
||||
" print(\"Add the path for your image data.\")\n",
|
||||
"else:\n",
|
||||
" input_data = json.dumps({'data': img.tolist()})\n",
|
||||
"\n",
|
||||
" try:\n",
|
||||
" r = json.loads(aci_service.run(input_data))\n",
|
||||
" result = r['result'][0]\n",
|
||||
" time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n",
|
||||
" except KeyError as e:\n",
|
||||
" print(str(e))\n",
|
||||
"\n",
|
||||
" plt.figure(figsize = (16, 6))\n",
|
||||
" plt.subplot(1,8,1)\n",
|
||||
" plt.axhline('')\n",
|
||||
" plt.axvline('')\n",
|
||||
" plt.text(x = -10, y = -40, s = \"Model prediction: \", fontsize = 14)\n",
|
||||
" plt.text(x = -10, y = -25, s = \"Inference time: \", fontsize = 14)\n",
|
||||
" plt.text(x = 100, y = -40, s = str(result), fontsize = 14)\n",
|
||||
" plt.text(x = 100, y = -25, s = str(time_ms) + \" ms\", fontsize = 14)\n",
|
||||
" plt.text(x = -10, y = -10, s = \"Model Input image: \", fontsize = 14)\n",
|
||||
" plt.imshow(img.reshape((64, 64)), cmap = plt.cm.gray) \n",
|
||||
" "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# remember to delete your service after you are done using it!\n",
|
||||
"\n",
|
||||
"aci_service.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Conclusion\n",
|
||||
"\n",
|
||||
"Congratulations!\n",
|
||||
"\n",
|
||||
"In this tutorial, you have:\n",
|
||||
"- familiarized yourself with ONNX Runtime inference and the pretrained models in the ONNX model zoo\n",
|
||||
"- understood a state-of-the-art convolutional neural net image classification model (FER+ in ONNX) and deployed it in the Azure ML cloud\n",
|
||||
"- ensured that your deep learning model is working perfectly (in the cloud) on test data, and checked it against some of your own!\n",
|
||||
"\n",
|
||||
"Next steps:\n",
|
||||
"- If you have not already, check out another interesting ONNX/AML application that lets you set up a state-of-the-art [handwritten image classification model (MNIST)](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model for handwritten digit classification in an Azure ML virtual machine.\n",
|
||||
"- Keep an eye out for an updated version of this tutorial that uses ONNX Runtime GPU.\n",
|
||||
"- Contribute to our [open source ONNX repository on github](http://github.com/onnx/onnx) and/or add to our [ONNX model zoo](http://github.com/onnx/models)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "viswamy"
|
||||
}
|
||||
],
|
||||
"category": "deployment",
|
||||
"compute": [
|
||||
"Local"
|
||||
],
|
||||
"datasets": [
|
||||
"Emotion FER"
|
||||
],
|
||||
"deployment": [
|
||||
"Azure Container Instance"
|
||||
],
|
||||
"exclude_from_index": false,
|
||||
"framework": [
|
||||
"ONNX"
|
||||
],
|
||||
"friendly_name": "Deploy Facial Expression Recognition (FER+) with ONNX Runtime",
|
||||
"index_order": 2,
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.5"
|
||||
},
|
||||
"msauthor": "vinitra.swamy",
|
||||
"star_tag": [],
|
||||
"tags": [
|
||||
"ONNX Model Zoo"
|
||||
],
|
||||
"task": "Facial Expression Recognition"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,9 +0,0 @@
|
||||
name: onnx-inference-facial-expression-recognition-deploy
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azureml-widgets
|
||||
- matplotlib
|
||||
- numpy
|
||||
- onnx<1.7.0
|
||||
- opencv-python-headless
|
||||
@@ -1,778 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Handwritten Digit Classification (MNIST) using ONNX Runtime on Azure ML\n",
|
||||
"\n",
|
||||
"This example shows how to deploy an image classification neural network using the Modified National Institute of Standards and Technology ([MNIST](http://yann.lecun.com/exdb/mnist/)) dataset and Open Neural Network eXchange format ([ONNX](http://aka.ms/onnxdocarticle)) on the Azure Machine Learning platform. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing number from 0 to 9. This tutorial will show you how to deploy a MNIST model from the [ONNX model zoo](https://github.com/onnx/models), use it to make predictions using ONNX Runtime Inference, and deploy it as a web service in Azure.\n",
"\n",
|
||||
"Throughout this tutorial, we will be referring to ONNX, a neural network exchange format used to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools (CNTK, PyTorch, Caffe, MXNet, TensorFlow) and choose the combination that is best for them. ONNX is developed and supported by a community of partners including Microsoft AI, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai) and [open source files](https://github.com/onnx).\n",
|
||||
"\n",
|
||||
"[ONNX Runtime](https://aka.ms/onnxruntime-python) is the runtime engine that enables evaluation of trained machine learning (Traditional ML and Deep Learning) models with high performance and low resource utilization.\n",
|
||||
"\n",
|
||||
"#### Tutorial Objectives:\n",
|
||||
"\n",
|
||||
"- Describe the MNIST dataset and pretrained Convolutional Neural Net ONNX model, stored in the ONNX model zoo.\n",
|
||||
"- Deploy and run the pretrained MNIST ONNX model on an Azure Machine Learning instance\n",
|
||||
"- Predict labels for test set data points in the cloud using ONNX Runtime and Azure ML"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"### 1. Install Azure ML SDK and create a new workspace\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
|
||||
"\n",
|
||||
"### 2. Install additional packages needed for this tutorial notebook\n",
|
||||
"You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed. \n",
"\n",
|
||||
"```sh\n",
|
||||
"(myenv) $ pip install matplotlib onnx opencv-python\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"**Debugging tip**: Make sure that you run the \"jupyter notebook\" command to launch this notebook after activating your virtual environment. Choose the respective Python kernel for your new virtual environment using the `Kernel > Change Kernel` menu above. If you have completed the steps correctly, the upper right corner of your screen should state `Python [conda env:myenv]` instead of `Python [default]`.\n",
|
||||
"\n",
|
||||
"### 3. Download sample data and pre-trained ONNX model from ONNX Model Zoo.\n",
|
||||
"\n",
|
||||
"In the following lines of code, we download [the trained ONNX MNIST model and corresponding test data](https://github.com/onnx/models/tree/master/vision/classification/mnist) and place them in the same folder as this tutorial notebook. For more information about the MNIST dataset, please visit [Yan LeCun's website](http://yann.lecun.com/exdb/mnist/)."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# urllib is a built-in Python library to download files from URLs\n",
|
||||
"\n",
|
||||
"# Objective: retrieve the latest version of the ONNX MNIST model files from the\n",
|
||||
"# ONNX Model Zoo and save it in the same folder as this tutorial\n",
|
||||
"\n",
|
||||
"import urllib.request\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"onnx_model_url = \"https://github.com/onnx/models/blob/main/vision/classification/mnist/model/mnist-7.tar.gz?raw=true\"\n",
|
||||
"\n",
|
||||
"urllib.request.urlretrieve(onnx_model_url, filename=\"mnist-7.tar.gz\")\n",
|
||||
"os.mkdir(\"mnist\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# the ! magic command tells our jupyter notebook kernel to run the following line of \n",
|
||||
"# code from the command line instead of the notebook kernel\n",
|
||||
"\n",
|
||||
"# We use tar and xvcf to unzip the files we just retrieved from the ONNX model zoo\n",
"\n",
|
||||
"!tar xvzf mnist-7.tar.gz -C mnist"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deploy a VM with your ONNX model in the Cloud\n",
|
||||
"\n",
|
||||
"### Load Azure ML workspace\n",
|
||||
"\n",
|
||||
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Registering your model with Azure ML"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"model_dir = \"mnist/model\" # replace this with the location of your model files\n",
|
||||
"\n",
|
||||
"# leave as is if it's in the same folder as this notebook"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import Model\n",
|
||||
"\n",
|
||||
"model = Model.register(workspace = ws,\n",
|
||||
" model_path = model_dir + \"/\" + \"model.onnx\",\n",
|
||||
" model_name = \"mnist_1\",\n",
|
||||
" tags = {\"onnx\": \"demo\"},\n",
|
||||
" description = \"MNIST image classification CNN from ONNX Model Zoo\",)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Optional: Displaying your registered models\n",
|
||||
"\n",
|
||||
"This step is not required, so feel free to skip it."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"models = ws.models\n",
|
||||
"for name, m in models.items():\n",
|
||||
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"nbpresent": {
|
||||
"id": "c3f2f57c-7454-4d3e-b38d-b0946cf066ea"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### ONNX MNIST Model Methodology\n",
|
||||
"\n",
|
||||
"The image classification model we are using is pre-trained using Microsoft's deep learning cognitive toolkit, [CNTK](https://github.com/Microsoft/CNTK), from the [ONNX model zoo](http://github.com/onnx/models). The model zoo has many other models that can be deployed on cloud providers like AzureML without any additional training. To ensure that our cloud deployed model works, we use testing data from the famous MNIST data set, provided as part of the [trained MNIST model](https://github.com/onnx/models/tree/master/vision/classification/mnist) in the ONNX model zoo.\n",
|
||||
"\n",
|
||||
"***Input: Handwritten Images from MNIST Dataset***\n",
|
||||
"\n",
|
||||
"***Task: Classify each MNIST image into an appropriate digit***\n",
|
||||
"\n",
|
||||
"***Output: Digit prediction for input image***\n",
|
||||
"\n",
|
||||
"Run the cell below to look at some of the sample images from the MNIST dataset that we used to train this ONNX model. Remember, once the application is deployed in Azure ML, you can use your own images as input for the model to classify!"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# for images and plots in this notebook\n",
|
||||
"import matplotlib.pyplot as plt \n",
|
||||
"from IPython.display import Image\n",
|
||||
"\n",
|
||||
"# display images inline\n",
|
||||
"%matplotlib inline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"Image(url=\"http://3.bp.blogspot.com/_UpN7DfJA0j4/TJtUBWPk0SI/AAAAAAAAABY/oWPMtmqJn3k/s1600/mnist_originals.png\", width=200, height=200)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Specify our Score and Environment Files"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We are now going to deploy our ONNX Model on AML with inference in ONNX Runtime. We begin by writing a score.py file, which will help us run the model in our Azure ML virtual machine (VM), and then specify our environment by writing a yml file. You will also notice that we import the onnxruntime library to do runtime inference on our ONNX models (passing in input and evaluating out model's predicted output). More information on the API and commands can be found in the [ONNX Runtime documentation](https://aka.ms/onnxruntime).\n",
"\n",
|
||||
"### Write Score File\n",
|
||||
"\n",
|
||||
"A score file is what tells our Azure cloud service what to do. After initializing our model using azureml.core.model, we start an ONNX Runtime inference session to evaluate the data passed in on our function calls."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile score.py\n",
|
||||
"import json\n",
|
||||
"import numpy as np\n",
|
||||
"import onnxruntime\n",
|
||||
"import sys\n",
|
||||
"import os\n",
|
||||
"import time\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global session, input_name, output_name\n",
|
||||
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
|
||||
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
|
||||
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
|
||||
" model = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.onnx')\n",
|
||||
" session = onnxruntime.InferenceSession(model, None)\n",
|
||||
" input_name = session.get_inputs()[0].name\n",
|
||||
" output_name = session.get_outputs()[0].name \n",
|
||||
" \n",
|
||||
"\n",
|
||||
"def preprocess(input_data_json):\n",
|
||||
" # convert the JSON data into the tensor input\n",
|
||||
" return np.array(json.loads(input_data_json)['data']).astype('float32')\n",
|
||||
"\n",
|
||||
"def postprocess(result):\n",
|
||||
" # We use argmax to pick the highest confidence label\n",
|
||||
" return int(np.argmax(np.array(result).squeeze(), axis=0))\n",
|
||||
" \n",
|
||||
"def run(input_data):\n",
|
||||
"\n",
|
||||
" try:\n",
|
||||
" # load in our data, convert to readable format\n",
|
||||
" data = preprocess(input_data)\n",
|
||||
" \n",
|
||||
" # start timer\n",
|
||||
" start = time.time()\n",
|
||||
" \n",
|
||||
" r = session.run([output_name], {input_name: data})\n",
|
||||
" \n",
|
||||
" #end timer\n",
|
||||
" end = time.time()\n",
|
||||
" \n",
|
||||
" result = postprocess(r)\n",
|
||||
" result_dict = {\"result\": result,\n",
|
||||
" \"time_in_sec\": end - start}\n",
|
||||
" except Exception as e:\n",
|
||||
" result_dict = {\"error\": str(e)}\n",
|
||||
" \n",
|
||||
" return result_dict\n",
|
||||
"\n",
|
||||
"def choose_class(result_prob):\n",
|
||||
" \"\"\"We use argmax to determine the right label to choose from our output\"\"\"\n",
|
||||
" return int(np.argmax(result_prob, axis=0))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Write Environment File\n",
|
||||
"\n",
|
||||
"This step creates a YAML environment file that specifies which dependencies we would like to see in our Linux Virtual Machine. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.conda_dependencies import CondaDependencies \n",
|
||||
"\n",
|
||||
"myenv = CondaDependencies.create(pip_packages=[\"numpy\", \"onnxruntime==1.15.1\", \"azureml-core\", \"azureml-defaults\"])\n",
|
||||
"\n",
|
||||
"with open(\"myenv.yml\",\"w\") as f:\n",
|
||||
" f.write(myenv.serialize_to_string())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create Inference Configuration"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import InferenceConfig\n",
|
||||
"from azureml.core.environment import Environment\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
|
||||
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deploy the model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.webservice import AciWebservice\n",
|
||||
"\n",
|
||||
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
||||
" memory_gb = 1, \n",
|
||||
" tags = {'demo': 'onnx'}, \n",
|
||||
" description = 'ONNX for mnist model')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The following cell will likely take a few minutes to run."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"aci_service_name = 'onnx-demo-mnist'\n",
|
||||
"print(\"Service\", aci_service_name)\n",
|
||||
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
|
||||
"aci_service.wait_for_deployment(True)\n",
|
||||
"print(aci_service.state)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if aci_service.state != 'Healthy':\n",
|
||||
" # run this command for debugging.\n",
|
||||
" print(aci_service.get_logs())\n",
|
||||
"\n",
|
||||
" # If your deployment fails, make sure to delete your aci_service or rename your service before trying again!\n",
|
||||
" # aci_service.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Success!\n",
|
||||
"\n",
|
||||
"If you've made it this far, you've deployed a working VM with a handwritten digit classifier running in the cloud using Azure ML. Congratulations!\n",
|
||||
"\n",
|
||||
"You can get the URL for the webservice with the code below. Let's now see how well our model deals with our test images."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(aci_service.scoring_uri)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Testing and Evaluation\n",
|
||||
"\n",
|
||||
"### Load Test Data\n",
|
||||
"\n",
|
||||
"These are already in your directory from your ONNX model download (from the model zoo).\n",
|
||||
"\n",
|
||||
"Notice that our Model Zoo files have a .pb extension. This is because they are [protobuf files (Protocol Buffers)](https://developers.google.com/protocol-buffers/docs/pythontutorial), so we need to read in our data through our ONNX TensorProto reader into a format we can work with, like numerical arrays."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# to manipulate our arrays\n",
|
||||
"import numpy as np \n",
|
||||
"\n",
|
||||
"# read in test data protobuf files included with the model\n",
|
||||
"import onnx\n",
|
||||
"from onnx import numpy_helper\n",
|
||||
"\n",
|
||||
"# to use parsers to read in our model/data\n",
|
||||
"import json\n",
|
||||
"\n",
|
||||
"test_inputs = []\n",
|
||||
"test_outputs = []\n",
|
||||
"\n",
|
||||
"# read in 1 testing images from .pb files\n",
"test_data_size = 1\n",
|
||||
"\n",
|
||||
"for i in np.arange(test_data_size):\n",
|
||||
" input_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(i), 'input_0.pb')\n",
|
||||
" output_test_data = os.path.join(model_dir, 'test_data_set_{0}'.format(i), 'output_0.pb')\n",
|
||||
" \n",
|
||||
" # convert protobuf tensors to np arrays using the TensorProto reader from ONNX\n",
|
||||
" tensor = onnx.TensorProto()\n",
|
||||
" with open(input_test_data, 'rb') as f:\n",
|
||||
" tensor.ParseFromString(f.read())\n",
|
||||
" \n",
|
||||
" input_data = numpy_helper.to_array(tensor)\n",
|
||||
" test_inputs.append(input_data)\n",
|
||||
" \n",
|
||||
" with open(output_test_data, 'rb') as f:\n",
|
||||
" tensor.ParseFromString(f.read())\n",
|
||||
" \n",
|
||||
" output_data = numpy_helper.to_array(tensor)\n",
|
||||
" test_outputs.append(output_data)\n",
|
||||
" \n",
|
||||
"if len(test_inputs) == test_data_size:\n",
|
||||
" print('Test data loaded successfully.')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"nbpresent": {
|
||||
"id": "c3f2f57c-7454-4d3e-b38d-b0946cf066ea"
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Show some sample images\n",
|
||||
"We use `matplotlib` to plot 1 test images from the dataset."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"nbpresent": {
|
||||
"id": "396d478b-34aa-4afa-9898-cdce8222a516"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"plt.figure(figsize = (16, 6))\n",
|
||||
"for test_image in np.arange(test_data_size):\n",
|
||||
" plt.subplot(1, 15, test_image+1)\n",
|
||||
" plt.axhline('')\n",
|
||||
" plt.axvline('')\n",
|
||||
" plt.imshow(test_inputs[test_image].reshape(28, 28), cmap = plt.cm.Greys)\n",
|
||||
"plt.show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Run evaluation / prediction"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"plt.figure(figsize = (16, 6))\n",
|
||||
"plt.subplot(1, 8, 1)\n",
|
||||
"\n",
|
||||
"plt.text(x = 0, y = -30, s = \"True Label: \", fontsize = 13, color = 'black')\n",
|
||||
"plt.text(x = 0, y = -20, s = \"Result: \", fontsize = 13, color = 'black')\n",
|
||||
"plt.text(x = 0, y = -10, s = \"Inference Time: \", fontsize = 13, color = 'black')\n",
|
||||
"plt.text(x = 3, y = 14, s = \"Model Input\", fontsize = 12, color = 'black')\n",
|
||||
"plt.text(x = 6, y = 18, s = \"(28 x 28)\", fontsize = 12, color = 'black')\n",
|
||||
"plt.imshow(np.ones((28,28)), cmap=plt.cm.Greys) \n",
|
||||
"\n",
|
||||
"\n",
|
||||
"for i in np.arange(test_data_size):\n",
|
||||
" \n",
|
||||
" input_data = json.dumps({'data': test_inputs[i].tolist()})\n",
|
||||
" \n",
|
||||
" # predict using the deployed model\n",
|
||||
" r = aci_service.run(input_data)\n",
|
||||
" \n",
|
||||
" if \"error\" in r:\n",
|
||||
" print(r['error'])\n",
|
||||
" break\n",
|
||||
" \n",
|
||||
" result = r['result']\n",
|
||||
" time_ms = np.round(r['time_in_sec'] * 1000, 2)\n",
|
||||
" \n",
|
||||
" ground_truth = int(np.argmax(test_outputs[i]))\n",
|
||||
" \n",
|
||||
" # compare actual value vs. the predicted values:\n",
|
||||
" plt.subplot(1, 8, i+2)\n",
|
||||
" plt.axhline('')\n",
|
||||
" plt.axvline('')\n",
|
||||
"\n",
|
||||
" # use different color for misclassified sample\n",
|
||||
" font_color = 'red' if ground_truth != result else 'black'\n",
|
||||
" clr_map = plt.cm.gray if ground_truth != result else plt.cm.Greys\n",
|
||||
"\n",
|
||||
" # ground truth labels are in blue\n",
|
||||
" plt.text(x = 10, y = -30, s = ground_truth, fontsize = 18, color = 'blue')\n",
|
||||
" \n",
|
||||
" # predictions are in black if correct, red if incorrect\n",
|
||||
" plt.text(x = 10, y = -20, s = result, fontsize = 18, color = font_color)\n",
|
||||
" plt.text(x = 5, y = -10, s = str(time_ms) + ' ms', fontsize = 14, color = font_color)\n",
|
||||
"\n",
|
||||
" \n",
|
||||
" plt.imshow(test_inputs[i].reshape(28, 28), cmap = clr_map)\n",
|
||||
"\n",
|
||||
"plt.show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Try classifying your own images!\n",
|
||||
"\n",
|
||||
"Create your own handwritten image and pass it into the model."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Preprocessing functions take your image and format it so it can be passed\n",
|
||||
"# as input into our ONNX model\n",
|
||||
"\n",
|
||||
"import cv2\n",
|
||||
"\n",
|
||||
"def rgb2gray(rgb):\n",
|
||||
" \"\"\"Convert the input image into grayscale\"\"\"\n",
|
||||
" return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n",
|
||||
"\n",
|
||||
"def resize_img(img_to_resize):\n",
|
||||
" \"\"\"Resize image to MNIST model input dimensions\"\"\"\n",
|
||||
" r_img = cv2.resize(img_to_resize, dsize=(28, 28), interpolation=cv2.INTER_AREA)\n",
|
||||
" r_img.resize((1, 1, 28, 28))\n",
|
||||
" return r_img\n",
|
||||
"\n",
|
||||
"def preprocess(img_to_preprocess):\n",
|
||||
" \"\"\"Resize input images and convert them to grayscale.\"\"\"\n",
|
||||
" if img_to_preprocess.shape == (28, 28):\n",
|
||||
" img_to_preprocess.resize((1, 1, 28, 28))\n",
|
||||
" return img_to_preprocess\n",
|
||||
" \n",
|
||||
" grayscale = rgb2gray(img_to_preprocess)\n",
|
||||
" processed_img = resize_img(grayscale)\n",
|
||||
" return processed_img"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Replace this string with your own path/test image\n",
|
||||
"# Make sure your image is square and the dimensions are equal (i.e. 100 * 100 pixels or 28 * 28 pixels)\n",
"\n",
|
||||
"# Any PNG or JPG image file should work\n",
|
||||
"\n",
|
||||
"your_test_image = \"<path to file>\"\n",
|
||||
"\n",
|
||||
"# e.g. your_test_image = \"C:/Users/vinitra.swamy/Pictures/handwritten_digit.png\"\n",
|
||||
"\n",
|
||||
"import matplotlib.image as mpimg\n",
|
||||
"\n",
|
||||
"if your_test_image != \"<path to file>\":\n",
|
||||
" img = mpimg.imread(your_test_image)\n",
|
||||
" plt.subplot(1,3,1)\n",
|
||||
" plt.imshow(img, cmap = plt.cm.Greys)\n",
|
||||
" print(\"Old Dimensions: \", img.shape)\n",
|
||||
" img = preprocess(img)\n",
|
||||
" print(\"New Dimensions: \", img.shape)\n",
|
||||
"else:\n",
|
||||
" img = None"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if img is None:\n",
|
||||
" print(\"Add the path for your image data.\")\n",
|
||||
"else:\n",
|
||||
" input_data = json.dumps({'data': img.tolist()})\n",
|
||||
"\n",
|
||||
" try:\n",
|
||||
" r = aci_service.run(input_data)\n",
|
||||
" result = r['result']\n",
|
||||
" time_ms = np.round(r['time_in_sec'] * 1000, 2)\n",
|
||||
" except KeyError as e:\n",
|
||||
" print(str(e))\n",
|
||||
"\n",
|
||||
" plt.figure(figsize = (16, 6))\n",
|
||||
" plt.subplot(1, 15,1)\n",
|
||||
" plt.axhline('')\n",
|
||||
" plt.axvline('')\n",
|
||||
" plt.text(x = -100, y = -20, s = \"Model prediction: \", fontsize = 14)\n",
|
||||
" plt.text(x = -100, y = -10, s = \"Inference time: \", fontsize = 14)\n",
|
||||
" plt.text(x = 0, y = -20, s = str(result), fontsize = 14)\n",
|
||||
" plt.text(x = 0, y = -10, s = str(time_ms) + \" ms\", fontsize = 14)\n",
|
||||
" plt.text(x = -100, y = 14, s = \"Input image: \", fontsize = 14)\n",
|
||||
" plt.imshow(img.reshape(28, 28), cmap = plt.cm.gray) "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Optional: How does our ONNX MNIST model work? \n",
|
||||
"#### A brief explanation of Convolutional Neural Networks\n",
|
||||
"\n",
|
||||
"A [convolutional neural network](https://en.wikipedia.org/wiki/Convolutional_neural_network) (CNN, or ConvNet) is a type of [feed-forward](https://en.wikipedia.org/wiki/Feedforward_neural_network) artificial neural network made up of neurons that have learnable weights and biases. The CNNs take advantage of the spatial nature of the data. In nature, we perceive different objects by their shapes, size and colors. For example, objects in a natural scene are typically edges, corners/vertices (defined by two of more edges), color patches etc. These primitives are often identified using different detectors (e.g., edge detection, color detector) or combination of detectors interacting to facilitate image interpretation (object classification, region of interest detection, scene description etc.) in real world vision related tasks. These detectors are also known as filters. Convolution is a mathematical operator that takes an image and a filter as input and produces a filtered output (representing say edges, corners, or colors in the input image). \n",
"\n",
|
||||
"Historically, these filters are a set of weights that were often hand crafted or modeled with mathematical functions (e.g., [Gaussian](https://en.wikipedia.org/wiki/Gaussian_filter) / [Laplacian](http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm) / [Canny](https://en.wikipedia.org/wiki/Canny_edge_detector) filter). The filter outputs are mapped through non-linear activation functions mimicking human brain cells called [neurons](https://en.wikipedia.org/wiki/Neuron). Popular deep CNNs or ConvNets (such as [AlexNet](https://en.wikipedia.org/wiki/AlexNet), [VGG](https://arxiv.org/abs/1409.1556), [Inception](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf), [ResNet](https://arxiv.org/pdf/1512.03385v1.pdf)) that are used for various [computer vision](https://en.wikipedia.org/wiki/Computer_vision) tasks have many of these architectural primitives (inspired from biology). \n",
|
||||
"\n",
|
||||
"### Convolution Layer\n",
|
||||
"\n",
|
||||
"A convolution layer is a set of filters. Each filter is defined by a weight (**W**) matrix, and bias ($b$).\n",
"\n",
|
||||
"These filters are scanned across the image performing the dot product between the weights and corresponding input value ($x$). The bias value is added to the output of the dot product and the resulting sum is optionally mapped through an activation function."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Model Description\n",
|
||||
"\n",
|
||||
"The MNIST model from the ONNX Model Zoo uses maxpooling to update the weights in its convolutions, summarized by the graphic below. You can see the entire workflow of our pre-trained model in the following image, with our input images and our output probabilities of each of our 10 labels. If you're interested in exploring the logic behind creating a Deep Learning model further, please look at the [training tutorial for our ONNX MNIST Convolutional Neural Network](https://github.com/Microsoft/CNTK/blob/master/Tutorials/CNTK_103D_MNIST_ConvolutionalNeuralNetwork.ipynb). "
|
||||
]
|
||||
},
|
||||
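For intuition, max pooling keeps only the largest activation in each window, downsampling the feature map; a small illustrative sketch (not the model's actual implementation):

```python
import numpy as np

def maxpool2d(fmap, size=2):
    """Downsample a 2D feature map by taking the max over non-overlapping windows."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size                # crop so the windows tile evenly
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(maxpool2d(fmap))                               # [[ 5.  7.], [13. 15.]]
```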
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# remember to delete your service after you are done using it!\n",
|
||||
"\n",
|
||||
"aci_service.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Conclusion\n",
|
||||
"\n",
|
||||
"Congratulations!\n",
|
||||
"\n",
|
||||
"In this tutorial, you have:\n",
|
||||
"- familiarized yourself with ONNX Runtime inference and the pretrained models in the ONNX model zoo\n",
|
||||
"- understood a state-of-the-art convolutional neural net image classification model (MNIST in ONNX) and deployed it in Azure ML cloud\n",
|
||||
"- ensured that your deep learning model is working perfectly (in the cloud) on test data, and checked it against some of your own!\n",
|
||||
"\n",
|
||||
"Next steps:\n",
|
||||
"- Check out another interesting application based on a Microsoft Research computer vision paper that lets you set up a [facial emotion recognition model](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model in an Azure ML virtual machine.\n",
|
||||
"- Contribute to our [open source ONNX repository on github](http://github.com/onnx/onnx) and/or add to our [ONNX model zoo](http://github.com/onnx/models)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "viswamy"
|
||||
}
|
||||
],
|
||||
"category": "deployment",
|
||||
"compute": [
|
||||
"Local"
|
||||
],
|
||||
"datasets": [
|
||||
"MNIST"
|
||||
],
|
||||
"deployment": [
|
||||
"Azure Container Instance"
|
||||
],
|
||||
"exclude_from_index": false,
|
||||
"framework": [
|
||||
"ONNX"
|
||||
],
|
||||
"friendly_name": "Deploy MNIST digit recognition with ONNX Runtime",
|
||||
"index_order": 1,
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.5"
|
||||
},
|
||||
"msauthor": "vinitra.swamy",
|
||||
"star_tag": [],
|
||||
"tags": [
|
||||
"ONNX Model Zoo"
|
||||
],
|
||||
"task": "Image Classification"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,9 +0,0 @@
|
||||
name: onnx-inference-mnist-deploy
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azureml-widgets
|
||||
- matplotlib
|
||||
- numpy
|
||||
- onnx<1.7.0
|
||||
- opencv-python-headless
|
||||
@@ -1 +0,0 @@
|
||||
{"inputs": {"Input3": {"dims": ["1", "1", "28", "28"], "dataType": 1, "rawData": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAPwAAQEAAAAAAAAAAAAAAgEAAAABAAAAAAAAAMEEAAAAAAAAAAAAAYEEAAIA/AAAAAAAAmEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQEEAAAAAAAAAAAAA4EAAAAAAAACAPwAAIEEAAAAAAAAAQAAAAEAAAIBBAAAAAAAAQEAAAEBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4EAAAABBAAAAAAAAAEEAAAAAAAAAAAAAAEEAAAAAAAAAAAAAmEEAAAAAAAAAAAAAgD8AAKhBAAAAAAAAgEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgD8AAAAAAAAAAAAAgD8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAMEEAAAAAAAAAAAAAIEEAAEBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQQQAAAAAAAHBBAAAgQQAA0EEAAAhCAACIQQAAmkIAADVDAAAyQwAADEIAAIBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAWQwAAfkMAAHpDAAB7QwAAc0MAAHxDAAB8QwAAf0MAADRCAADAQAAAAAAAAKBAAAAAAAAAEEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOBAAACQQgAATUMAAH9DAABuQwAAc0MAAH9DAAB+QwAAe0MAAHhDAABJQwAARkMAAGRCAAAAAAAAmEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAWkMAAH9DAABxQwAAf0MAAHlDAAB6QwAAe0MAAHpDAAB/QwAAf0MAAHJDAABgQwAAREIAAAAAAABAQQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgD8AAABAAABAQAAAAEAAAABAAACAPwAAAAAAAIJCAABkQwAAf0MAAH5DAAB0QwAA7kIAAAhCAAAkQgAA3EIAAHpDAAB/QwAAeEMAAPhCAACgQQAAAAAAAAAAAAAAAAAAAAAAAAAAAACAPwAAgD8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEBBAAAAAAAAeEIAAM5CAADiQgAA6kIAAAhCAAAAAAAAAAAAAAAAAABIQwAAdEMAAH9DAAB/QwAAAAAAAEBBAAAAAAAAAAAAAAAAAAAAAAAAAEAAAIA/AAAAAAAAAAAAAAAAAAAAAAAAgD8AAABAAAAAAAAAAAAAAABAAACAQAAAAAAAADBBAAAAAAAA4EAAAMBAAAAAAAAAlkIAAHRDAAB/QwAAf0MAAIBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIA/AAAAQAAAQEAAAIBAAACAQAAAAAAAAGBBAAAAAAAAAAAAAAAAAAAQQQAAAAAAAABAAAAAAAAAAAAAAAhCAAB/QwAAf0MAAH1DAAAgQQAAIEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIA/AAAAQAAAQEAAAABAAAAAAAAAAAAAAEBAAAAAQAAAAAAAAFBBAAAwQQAAAAAAAAAAAAAAAAAAwEAAAEBBAADGQgAAf0MAAH5DAAB4QwAAcEEAAEBBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIA/AACAPwAAgD8AAAAAAAAAAAAAAAAAAAAAAACAPwAAgD8AAAAAAAAAAAAAoEAAAMBAAAAwQQAAAAAAAAAAAACIQQAAOEMAAHdDAAB/QwAAc0MAAFBBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEBAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAABAAACAQAAAgEAAAAAAAAAwQQAAAAAAAExCAAC8QgAAqkIAAKBAAACgQAAAyEEAAHZDAAB2QwAAf0MAAFBDAAAAAAAAEEEAAAAAAAAAAAAAAAAAAAAAAACAQAAAgD8AAAAAAAAAAAAAgD8AAOBAAABwQQAAmEEAAMZCAADOQgAANkMAAD1DAABtQwAAfUMAAHxDAAA/QwAAPkMAAGNDAABzQwAAfEMAAFJDAACQQQAA4EAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIBAAAAAAAAAAAAAAABCAADaQgAAOUMAAHdDAAB/QwAAckMAAH9DAAB0QwAAf0MAAH9DAAByQwAAe0MAAH9DAABwQwAAf0MAAH9DAABaQwAA+EIAABBBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAD+QgAAf0MAAGtDAAB/QwAAf0MAAHdDAABlQwAAVEMAAHJDAAB6QwAAf0MAAH9DAAB4QwAAf0MAAH1DAAB5QwAAf0MAAHNDAAAqQwAAQEEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMEEAAAAAAAAQQQAAfUMAAH9DAAB/QwAAaUMAAEpDAACqQgAAAAAAAFRCAABEQwAAbkMAAH9DAABjQwAAbkMAAA5DAADaQgAAQUMAAH9DAABwQwAAf0MAADRDAAAAAAAAAAAAAAAAAAAAAAAAwEAAAAAAAACwQQAAgD8AAHVDAABzQwAAfkMAAH9DAABZQwAAa0MAAGJDAABVQwAAdEMAAHtDAAB/QwAAb0MAAJpCAAAAAAAAAAAAAKBBAAA2QwAAd0MAAG9DAABzQwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIBAAAAlQwAAe0MAAH9DAAB1QwAAf0MAAHJDAAB9QwAAekMAAH9DAABFQwAA1kIAAGxCAAAAAAAAkEEAAABAAADAQAAAAAAAAFhCAAB/QwAAHkMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwEEAAAAAAAAAAAAAwEAAAAhCAAAnQwAAQkMAADBDAAA3QwAAJEMAADBCAAAAQAAAIEEAAMBAAADAQAAAAAAAAAAAAACgQAAAAAAAAIA/AAAAAAAAYEEAAABAAAAAAAAAAAAAAAAAAAAAAAAAIEEAAAAAAABgQQAAAAAAAEBBAAAAAAAAoEAAAAAAAACAPwAAAAAAAMBAAAAAAAAA4EAAAAAAAAAAAAAAAAAAAABBAAAAAAAAIEEAAAAAAACgQAAAAAAAAAAAAAAgQQAAAAAAAAAAAAAAAAAAAAAAAAAAAABgQQAAAAAAAIBAAAAAAAAAAAAAAMhBA
AAAAAAAAAAAABBBAAAAAAAAAAAAABBBAAAAAAAAMEEAAAAAAACAPwAAAAAAAAAAAAAAQAAAAAAAAAAAAADgQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=="}}, "outputFilter": ["Plus214_Output_0"]}
|
||||
@@ -1,228 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Register ONNX model and deploy as webservice\n",
|
||||
"\n",
|
||||
"Following this notebook, you will:\n",
|
||||
"\n",
|
||||
" - Learn how to register an ONNX in your Azure Machine Learning Workspace.\n",
|
||||
" - Deploy your model as a web service in an Azure Container Instance."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration notebook](../../../configuration.ipynb) to install the Azure Machine Learning Python SDK and create a workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"# Check core SDK version number.\n",
|
||||
"print('SDK version:', azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize workspace\n",
|
||||
"\n",
|
||||
"Create a [Workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace%28class%29?view=azure-ml-py) object from your persisted configuration."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"create workspace"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Register model\n",
|
||||
"\n",
|
||||
"Register a file or folder as a model by calling [Model.register()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#register-workspace--model-path--model-name--tags-none--properties-none--description-none--datasets-none--model-framework-none--model-framework-version-none--child-paths-none-). For this example, we have provided a trained ONNX MNIST model(`mnist-model.onnx` in the notebook's directory).\n",
|
||||
"\n",
|
||||
"In addition to the content of the model file itself, your registered model will also store model metadata -- model description, tags, and framework information -- that will be useful when managing and deploying models in your workspace. Using tags, for instance, you can categorize your models and apply filters when listing models in your workspace. Also, marking this model with the scikit-learn framework will simplify deploying it as a web service, as we'll see later."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"register model from file"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Model\n",
|
||||
"\n",
|
||||
"model = Model.register(workspace=ws,\n",
|
||||
" model_name='mnist-sample', # Name of the registered model in your workspace.\n",
|
||||
" model_path='mnist-model.onnx', # Local ONNX model to upload and register as a model.\n",
|
||||
" model_framework=Model.Framework.ONNX , # Framework used to create the model.\n",
|
||||
" model_framework_version='1.3', # Version of ONNX used to create the model.\n",
|
||||
" description='Onnx MNIST model')\n",
|
||||
"\n",
|
||||
"print('Name:', model.name)"
|
||||
]
|
||||
},
|
||||
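Tags and framework metadata make it easy to find this model again later; a minimal sketch of filtering the workspace's models, assuming azureml-core's `Model.list` API (the tag used here is hypothetical -- we did not attach tags above):

```python
from azureml.core import Model

# Tag filters accept either 'key' or ['key', 'value'] entries.
for m in Model.list(ws, tags=[['area', 'digits']]):   # hypothetical tag
    print(m.name, m.version, m.tags)
```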
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deploy model\n",
|
||||
"\n",
|
||||
"Deploy your model as a web service using [Model.deploy()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py#deploy-workspace--name--models--inference-config--deployment-config-none--deployment-target-none-). Web services take one or more models, load them in an environment, and run them on one of several supported deployment targets.\n",
|
||||
"\n",
|
||||
"For this example, we will deploy the ONNX model to an Azure Container Instance (ACI)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Use a default environment (for supported models)\n",
|
||||
"\n",
|
||||
"The Azure Machine Learning service provides a default environment for supported model frameworks, including ONNX, based on the metadata you provided when registering your model. This is the easiest way to deploy your model.\n",
|
||||
"\n",
|
||||
"**Note**: This step can take several minutes."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Webservice\n",
|
||||
"from azureml.exceptions import WebserviceException\n",
|
||||
"\n",
|
||||
"service_name = 'onnx-mnist-service'\n",
|
||||
"\n",
|
||||
"# Remove any existing service under the same name.\n",
|
||||
"try:\n",
|
||||
" Webservice(ws, service_name).delete()\n",
|
||||
"except WebserviceException:\n",
|
||||
" pass\n",
|
||||
"\n",
|
||||
"service = Model.deploy(ws, service_name, [model])\n",
|
||||
"service.wait_for_deployment(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"After your model is deployed, perform a call to the web service."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}\n",
|
||||
"\n",
|
||||
"if service.auth_enabled:\n",
|
||||
" headers['Authorization'] = 'Bearer '+ service.get_keys()[0]\n",
|
||||
"elif service.token_auth_enabled:\n",
|
||||
" headers['Authorization'] = 'Bearer '+ service.get_token()[0]\n",
|
||||
"\n",
|
||||
"scoring_uri = service.scoring_uri\n",
|
||||
"print(scoring_uri)\n",
|
||||
"with open('onnx-mnist-predict-input.json', 'rb') as data_file:\n",
|
||||
" response = requests.post(\n",
|
||||
" scoring_uri, data=data_file, headers=headers)\n",
|
||||
"print(response.status_code)\n",
|
||||
"print(response.elapsed)\n",
|
||||
"print(response.json())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"When you are finished testing your service, clean up the deployment."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"service.delete()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "vaidyas"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,4 +0,0 @@
|
||||
name: onnx-model-register-and-deploy
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
@@ -1,416 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# ResNet50 Image Classification using ONNX and AzureML\n",
|
||||
"\n",
|
||||
"This example shows how to deploy the ResNet50 ONNX model as a web service using Azure Machine Learning services and the ONNX Runtime.\n",
|
||||
"\n",
|
||||
"## What is ONNX\n",
|
||||
"ONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by enabling data scientists and developers to use the tools of their choice without worrying about lock-in and flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai).\n",
|
||||
"\n",
|
||||
"## ResNet50 Details\n",
|
||||
"ResNet classifies the major object in an input image into a set of 1000 pre-defined classes. For more information about the ResNet50 model and how it was created can be found on the [ONNX Model Zoo github](https://github.com/onnx/models/tree/master/vision/classification/resnet). "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"To make the best use of your time, make sure you have done the following:\n",
|
||||
"\n",
|
||||
"* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n",
|
||||
"* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to:\n",
|
||||
" * install the AML SDK\n",
|
||||
" * create a workspace and its configuration file (config.json)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Download pre-trained ONNX model from ONNX Model Zoo.\n",
|
||||
"\n",
|
||||
"Download the [ResNet50v2 model and test data](https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz) and extract it in the same folder as this tutorial notebook.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import urllib.request\n",
|
||||
"\n",
|
||||
"onnx_model_url = \"https://s3.amazonaws.com/onnx-model-zoo/resnet/resnet50v2/resnet50v2.tar.gz\"\n",
|
||||
"urllib.request.urlretrieve(onnx_model_url, filename=\"resnet50v2.tar.gz\")\n",
|
||||
"\n",
|
||||
"!tar xvzf resnet50v2.tar.gz"
|
||||
]
|
||||
},
|
||||
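Optionally, you can sanity-check the extracted model before registering it; a minimal sketch, assuming the `onnx` package is installed locally:

```python
import onnx

# Load the extracted model and run ONNX's structural validation.
onnx_model = onnx.load("resnet50v2/resnet50v2.onnx")
onnx.checker.check_model(onnx_model)
print(onnx_model.graph.input[0].name)   # the input tensor our scoring script will feed
```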
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deploying as a web service with Azure ML"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Load your Azure ML workspace\n",
|
||||
"\n",
|
||||
"We begin by instantiating a workspace object from the existing workspace created earlier in the configuration notebook."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.location, ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Register your model with Azure ML\n",
|
||||
"\n",
|
||||
"Now we upload the model and register it in the workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import Model\n",
|
||||
"\n",
|
||||
"model = Model.register(model_path = \"resnet50v2/resnet50v2.onnx\",\n",
|
||||
" model_name = \"resnet50v2\",\n",
|
||||
" tags = {\"onnx\": \"demo\"},\n",
|
||||
" description = \"ResNet50v2 from ONNX Model Zoo\",\n",
|
||||
" workspace = ws)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Displaying your registered models\n",
|
||||
"\n",
|
||||
"You can optionally list out all the models that you have registered in this workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"models = ws.models\n",
|
||||
"for name, m in models.items():\n",
|
||||
" print(\"Name:\", name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Write scoring file\n",
|
||||
"\n",
|
||||
"We are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started so we load the model using the ONNX Runtime into a global session object."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile score.py\n",
|
||||
"import json\n",
|
||||
"import time\n",
|
||||
"import sys\n",
|
||||
"import os\n",
|
||||
"import numpy as np # we're going to use numpy to process input and output data\n",
|
||||
"import onnxruntime # to inference ONNX models, we use the ONNX Runtime\n",
|
||||
"\n",
|
||||
"def softmax(x):\n",
|
||||
" x = x.reshape(-1)\n",
|
||||
" e_x = np.exp(x - np.max(x))\n",
|
||||
" return e_x / e_x.sum(axis=0)\n",
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global session\n",
|
||||
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
|
||||
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
|
||||
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
|
||||
" model = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'resnet50v2.onnx')\n",
|
||||
" session = onnxruntime.InferenceSession(model, None)\n",
|
||||
"\n",
|
||||
"def preprocess(input_data_json):\n",
|
||||
" # convert the JSON data into the tensor input\n",
|
||||
" img_data = np.array(json.loads(input_data_json)['data']).astype('float32')\n",
|
||||
" \n",
|
||||
" #normalize\n",
|
||||
" mean_vec = np.array([0.485, 0.456, 0.406])\n",
|
||||
" stddev_vec = np.array([0.229, 0.224, 0.225])\n",
|
||||
" norm_img_data = np.zeros(img_data.shape).astype('float32')\n",
|
||||
" for i in range(img_data.shape[0]):\n",
|
||||
" norm_img_data[i,:,:] = (img_data[i,:,:]/255 - mean_vec[i]) / stddev_vec[i]\n",
|
||||
"\n",
|
||||
" return norm_img_data\n",
|
||||
"\n",
|
||||
"def postprocess(result):\n",
|
||||
" return softmax(np.array(result)).tolist()\n",
|
||||
"\n",
|
||||
"def run(input_data_json):\n",
|
||||
" try:\n",
|
||||
" start = time.time()\n",
|
||||
" # load in our data which is expected as NCHW 224x224 image\n",
|
||||
" input_data = preprocess(input_data_json)\n",
|
||||
" input_name = session.get_inputs()[0].name # get the id of the first input of the model \n",
|
||||
" result = session.run([], {input_name: input_data})\n",
|
||||
" end = time.time() # stop timer\n",
|
||||
" return {\"result\": postprocess(result),\n",
|
||||
" \"time\": end - start}\n",
|
||||
" except Exception as e:\n",
|
||||
" result = str(e)\n",
|
||||
" return {\"error\": result}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create inference configuration"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"First we create a YAML file that specifies which dependencies we would like to see in our container."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.conda_dependencies import CondaDependencies \n",
|
||||
"\n",
|
||||
"\n",
|
||||
"myenv = CondaDependencies.create(pip_packages=[\"numpy\", \"onnxruntime==1.15.1\", \"azureml-core\", \"azureml-defaults\"])\n",
|
||||
"\n",
|
||||
"with open(\"myenv.yml\",\"w\") as f:\n",
|
||||
" f.write(myenv.serialize_to_string())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Create the inference configuration object. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import InferenceConfig\n",
|
||||
"from azureml.core.environment import Environment\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
|
||||
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deploy the model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.webservice import AciWebservice\n",
|
||||
"\n",
|
||||
"aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, \n",
|
||||
" memory_gb = 1, \n",
|
||||
" tags = {'demo': 'onnx'}, \n",
|
||||
" description = 'web service for ResNet50 ONNX model')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The following cell will likely take a few minutes to run as well."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from random import randint\n",
|
||||
"\n",
|
||||
"aci_service_name = 'onnx-demo-resnet50'+str(randint(0,100))\n",
|
||||
"print(\"Service\", aci_service_name)\n",
|
||||
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
|
||||
"aci_service.wait_for_deployment(True)\n",
|
||||
"print(aci_service.state)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In case the deployment fails, you can check the logs. Make sure to delete your aci_service before trying again."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if aci_service.state != 'Healthy':\n",
|
||||
" # run this command for debugging.\n",
|
||||
" print(aci_service.get_logs())\n",
|
||||
" aci_service.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Success!\n",
|
||||
"\n",
|
||||
"If you've made it this far, you've deployed a working web service that does image classification using an ONNX model. You can get the URL for the webservice with the code below."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"print(aci_service.scoring_uri)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"When you are eventually done using the web service, remember to delete it."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"aci_service.delete()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "viswamy"
|
||||
}
|
||||
],
|
||||
"category": "deployment",
|
||||
"compute": [
|
||||
"Local"
|
||||
],
|
||||
"datasets": [
|
||||
"ImageNet"
|
||||
],
|
||||
"deployment": [
|
||||
"Azure Container Instance"
|
||||
],
|
||||
"exclude_from_index": false,
|
||||
"framework": [
|
||||
"ONNX"
|
||||
],
|
||||
"friendly_name": "Deploy ResNet50 with ONNX Runtime",
|
||||
"index_order": 4,
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.5"
|
||||
},
|
||||
"star_tag": [],
|
||||
"tags": [
|
||||
"ONNX Model Zoo"
|
||||
],
|
||||
"task": "Image Classification"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,4 +0,0 @@
|
||||
name: onnx-modelzoo-aml-deploy-resnet50
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
File diff suppressed because one or more lines are too long
@@ -1,5 +0,0 @@
|
||||
name: onnx-train-pytorch-aml-deploy-mnist
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azureml-widgets
|
||||
@@ -1,355 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Deploying a web service to Azure Kubernetes Service (AKS)\n",
|
||||
"This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (one time action), and deploying a service to it. \n",
|
||||
"We then test and delete the service, image and model."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"print(azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Get workspace\n",
|
||||
"Load existing workspace from the config file info."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Download the model\n",
|
||||
"\n",
|
||||
"Prior to registering the model, you should have a TensorFlow [Saved Model](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md) in the `resnet50` directory. This cell will download a [pretrained resnet50](http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz) and unpack it to that directory."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import requests\n",
|
||||
"import shutil\n",
|
||||
"import tarfile\n",
|
||||
"import tempfile\n",
|
||||
"\n",
|
||||
"from io import BytesIO\n",
|
||||
"\n",
|
||||
"model_url = \"http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz\"\n",
|
||||
"\n",
|
||||
"archive_prefix = \"./resnet_v1_fp32_savedmodel_NCHW_jpg/1538686758/\"\n",
|
||||
"target_folder = \"resnet50\"\n",
|
||||
"\n",
|
||||
"if not os.path.exists(target_folder):\n",
|
||||
" response = requests.get(model_url)\n",
|
||||
" archive = tarfile.open(fileobj=BytesIO(response.content))\n",
|
||||
" with tempfile.TemporaryDirectory() as temp_folder:\n",
|
||||
" archive.extractall(temp_folder)\n",
|
||||
" shutil.copytree(os.path.join(temp_folder, archive_prefix), target_folder)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Register the model\n",
|
||||
"Register an existing trained model, add description and tags."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import Model\n",
|
||||
"\n",
|
||||
"model = Model.register(model_path=\"resnet50\", # This points to the local directory to upload.\n",
|
||||
" model_name=\"resnet50\", # This is the name the model is registered as.\n",
|
||||
" tags={'area': \"Image classification\", 'type': \"classification\"},\n",
|
||||
" description=\"Image classification trained on Imagenet Dataset\",\n",
|
||||
" workspace=ws)\n",
|
||||
"\n",
|
||||
"print(model.name, model.description, model.version)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Provision the AKS Cluster\n",
|
||||
"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import ComputeTarget, AksCompute\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"\n",
|
||||
"# Choose a name for your GPU cluster\n",
|
||||
"gpu_cluster_name = \"aks-gpu-cluster\"\n",
|
||||
"\n",
|
||||
"# Choose a location for your GPU cluster\n",
|
||||
"gpu_cluster_location = \"eastus\"\n",
|
||||
"\n",
|
||||
"# Verify that cluster does not exist already\n",
|
||||
"try:\n",
|
||||
" gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)\n",
|
||||
" print(\"Found existing gpu cluster\")\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print(\"Creating new gpu-cluster\")\n",
|
||||
" \n",
|
||||
" # Specify the configuration for the new cluster\n",
|
||||
" compute_config = AksCompute.provisioning_configuration(cluster_purpose=AksCompute.ClusterPurpose.DEV_TEST,\n",
|
||||
" agent_count=1,\n",
|
||||
" vm_size=\"Standard_NC6s_v3\",\n",
|
||||
" location=gpu_cluster_location)\n",
|
||||
" # Create the cluster with the specified name and configuration\n",
|
||||
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)\n",
|
||||
"\n",
|
||||
" # Wait for the cluster to complete, show the output log\n",
|
||||
" gpu_cluster.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Deploy the model as a web service to AKS\n",
|
||||
"\n",
|
||||
"First create a scoring script"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile score.py\n",
|
||||
"import tensorflow.compat.v1 as tf\n",
|
||||
"import numpy as np\n",
|
||||
"import json\n",
|
||||
"import os\n",
|
||||
"from azureml.contrib.services.aml_request import AMLRequest, rawhttp\n",
|
||||
"from azureml.contrib.services.aml_response import AMLResponse\n",
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global session\n",
|
||||
" global input_name\n",
|
||||
" global output_name\n",
|
||||
" \n",
|
||||
" session = tf.Session()\n",
|
||||
"\n",
|
||||
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
|
||||
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
|
||||
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
|
||||
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'resnet50')\n",
|
||||
" model = tf.saved_model.loader.load(session, ['serve'], model_path)\n",
|
||||
" if len(model.signature_def['serving_default'].inputs) > 1:\n",
|
||||
" raise ValueError(\"This score.py only supports one input\")\n",
|
||||
" input_name = [tensor.name for tensor in model.signature_def['serving_default'].inputs.values()][0]\n",
|
||||
" output_name = [tensor.name for tensor in model.signature_def['serving_default'].outputs.values()]\n",
|
||||
" \n",
|
||||
"\n",
|
||||
"@rawhttp\n",
|
||||
"def run(request):\n",
|
||||
" if request.method == 'POST':\n",
|
||||
" reqBody = request.get_data(False)\n",
|
||||
" resp = score(reqBody)\n",
|
||||
" return AMLResponse(resp, 200)\n",
|
||||
" if request.method == 'GET':\n",
|
||||
" respBody = str.encode(\"GET is not supported\")\n",
|
||||
" return AMLResponse(respBody, 405)\n",
|
||||
" return AMLResponse(\"bad request\", 500)\n",
|
||||
"\n",
|
||||
"def score(data):\n",
|
||||
" result = session.run(output_name, {input_name: [data]})\n",
|
||||
" return json.dumps(result[1].tolist())\n",
|
||||
"\n",
|
||||
"if __name__ == \"__main__\":\n",
|
||||
" init()\n",
|
||||
" with open(\"test_image.jpg\", 'rb') as f:\n",
|
||||
" content = f.read()\n",
|
||||
" print(score(content))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now create the deployment configuration objects and deploy the model as a webservice."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set the web service configuration (using default here)\n",
|
||||
"from azureml.core.model import InferenceConfig\n",
|
||||
"from azureml.core.webservice import AksWebservice\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"from azureml.core.environment import Environment, DEFAULT_GPU_IMAGE\n",
|
||||
"\n",
|
||||
"env = Environment('deploytocloudenv')\n",
|
||||
"# Please see [Azure ML Containers repository](https://github.com/Azure/AzureML-Containers#featured-tags)\n",
|
||||
"# for open-sourced GPU base images.\n",
|
||||
"env.docker.base_image = DEFAULT_GPU_IMAGE\n",
|
||||
"env.python.conda_dependencies = CondaDependencies.create(python_version=\"3.8\", pin_sdk_version=False,\n",
|
||||
" conda_packages=['tensorflow-gpu','numpy'],\n",
|
||||
" pip_packages=['azureml-contrib-services', 'azureml-defaults'])\n",
|
||||
"\n",
|
||||
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=env)\n",
|
||||
"aks_config = AksWebservice.deploy_configuration()\n",
|
||||
"\n",
|
||||
"# # Enable token auth and disable (key) auth on the webservice\n",
|
||||
"# aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True, auth_enabled=False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"aks_service_name ='gpu-rn50'\n",
|
||||
"\n",
|
||||
"aks_service = Model.deploy(workspace=ws,\n",
|
||||
" name=aks_service_name,\n",
|
||||
" models=[model],\n",
|
||||
" inference_config=inference_config,\n",
|
||||
" deployment_config=aks_config,\n",
|
||||
" deployment_target=gpu_cluster)\n",
|
||||
"\n",
|
||||
"aks_service.wait_for_deployment(show_output = True)\n",
|
||||
"print(aks_service.state)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Test the web service\n",
|
||||
"We test the web sevice by passing the test images content."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"# if (key) auth is enabled, fetch keys and include in the request\n",
|
||||
"key1, key2 = aks_service.get_keys()\n",
|
||||
"\n",
|
||||
"headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + key1}\n",
|
||||
"\n",
|
||||
"# # if token auth is enabled, fetch token and include in the request\n",
|
||||
"# access_token, fetch_after = aks_service.get_token()\n",
|
||||
"# headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + access_token}\n",
|
||||
"\n",
|
||||
"test_sample = open('snowleopardgaze.jpg', 'rb').read()\n",
|
||||
"resp = requests.post(aks_service.scoring_uri, test_sample, headers=headers)"
|
||||
]
|
||||
},
|
||||
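The response body is the JSON-encoded class scores that score.py produced; a hedged sketch of inspecting it (the exact output shape depends on the SavedModel's outputs, and the ImageNet label lookup is omitted here):

```python
import numpy as np

print("status:", resp.status_code)
scores = np.array(resp.json())                   # class scores returned by score.py
print("top class index:", int(scores.argmax()))
```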
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Clean up\n",
|
||||
"Delete the service, image, model and compute target"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"aks_service.delete()\n",
|
||||
"model.delete()\n",
|
||||
"gpu_cluster.delete()\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "vaidyas"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.0"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,4 +0,0 @@
|
||||
name: production-deploy-to-aks-gpu
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
Binary file not shown.
|
Before Width: | Height: | Size: 61 KiB |
@@ -1,356 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Deploying a web service to Azure Kubernetes Service (AKS)\n",
|
||||
"This notebook shows the steps for deploying a service: registering a model, provisioning a cluster with ssl (one time action), and deploying a service to it. \n",
|
||||
"We then test and delete the service, image and model."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"from azureml.core.compute import AksCompute, ComputeTarget\n",
|
||||
"from azureml.core.webservice import Webservice, AksWebservice\n",
|
||||
"from azureml.core.model import Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"print(azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Get workspace\n",
|
||||
"Load existing workspace from the config file info."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Register the model\n",
|
||||
"Register an existing trained model, add descirption and tags."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Register the model\n",
|
||||
"from azureml.core.model import Model\n",
|
||||
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
|
||||
" model_name = \"sklearn_model\", # this is the name the model is registered as\n",
|
||||
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
|
||||
" description = \"Ridge regression model to predict diabetes\",\n",
|
||||
" workspace = ws)\n",
|
||||
"\n",
|
||||
"print(model.name, model.description, model.version)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Create the Environment\n",
|
||||
"Create an environment that the model will be deployed with"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Environment\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies \n",
|
||||
"\n",
|
||||
"conda_deps = CondaDependencies.create(conda_packages=['numpy', 'scikit-learn==0.22.1', 'scipy'], pip_packages=['azureml-defaults', 'inference-schema'])\n",
|
||||
"myenv = Environment(name='myenv')\n",
|
||||
"myenv.python.conda_dependencies = conda_deps"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Use a custom Docker image\n",
|
||||
"\n",
|
||||
"You can also specify a custom Docker image to be used as base image if you don't want to use the default base image provided by Azure ML. Please make sure the custom Docker image has Ubuntu >= 16.04, Conda >= 4.5.\\* and Python(3.5.\\* or 3.6.\\*).\n",
|
||||
"\n",
|
||||
"Only supported with `python` runtime.\n",
|
||||
"```python\n",
|
||||
"# use an image available in public Container Registry without authentication\n",
|
||||
"myenv.docker.base_image = \"mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda\"\n",
|
||||
"\n",
|
||||
"# or, use an image available in a private Container Registry\n",
|
||||
"myenv.docker.base_image = \"myregistry.azurecr.io/mycustomimage:1.0\"\n",
|
||||
"myenv.docker.base_image_registry.address = \"myregistry.azurecr.io\"\n",
|
||||
"myenv.docker.base_image_registry.username = \"username\"\n",
|
||||
"myenv.docker.base_image_registry.password = \"password\"\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Write the Entry Script\n",
|
||||
"Write the script that will be used to predict on your model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile score_ssl.py\n",
|
||||
"import os\n",
|
||||
"import pickle\n",
|
||||
"import json\n",
|
||||
"import numpy\n",
|
||||
"import joblib\n",
|
||||
"from sklearn.linear_model import Ridge\n",
|
||||
"from inference_schema.schema_decorators import input_schema, output_schema\n",
|
||||
"from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType\n",
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global model\n",
|
||||
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
|
||||
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
|
||||
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
|
||||
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
|
||||
" # deserialize the model file back into a sklearn model\n",
|
||||
" model = joblib.load(model_path)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"standard_sample_input = {'a': 10, 'b': 9, 'c': 8, 'd': 7, 'e': 6, 'f': 5, 'g': 4, 'h': 3, 'i': 2, 'j': 1 }\n",
|
||||
"standard_sample_output = {'outcome': 1}\n",
|
||||
"\n",
|
||||
"@input_schema('param', StandardPythonParameterType(standard_sample_input))\n",
|
||||
"@output_schema(StandardPythonParameterType(standard_sample_output))\n",
|
||||
"def run(param):\n",
|
||||
" try:\n",
|
||||
" raw_data = [param['a'], param['b'], param['c'], param['d'], param['e'], param['f'], param['g'], param['h'], param['i'], param['j']]\n",
|
||||
" data = numpy.array([raw_data])\n",
|
||||
" result = model.predict(data)\n",
|
||||
" return { 'outcome' : result[0] }\n",
|
||||
" except Exception as e:\n",
|
||||
" error = str(e)\n",
|
||||
" return error"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Create the InferenceConfig\n",
|
||||
"Create the inference config that will be used when deploying the model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import InferenceConfig\n",
|
||||
"\n",
|
||||
"inf_config = InferenceConfig(entry_script='score_ssl.py', environment=myenv)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Provision the AKS Cluster with SSL\n",
|
||||
"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it.\n",
|
||||
"\n",
|
||||
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
|
||||
"\n",
|
||||
"See code snippet below. Check the documentation [here](https://docs.microsoft.com/azure/machine-learning/v1/how-to-secure-web-service) for more details"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Use the default configuration (can also provide parameters to customize)\n",
|
||||
"\n",
|
||||
"provisioning_config = AksCompute.provisioning_configuration()\n",
|
||||
"# Leaf domain label generates a name using the formula\n",
|
||||
"# \"<leaf-domain-label>######.<azure-region>.cloudapp.azure.net\"\n",
|
||||
"# where \"######\" is a random series of characters\n",
|
||||
"provisioning_config.enable_ssl(leaf_domain_label = \"contoso\", overwrite_existing_domain = True)\n",
|
||||
"\n",
|
||||
"aks_name = 'my-aks-ssl-1' \n",
|
||||
"# Create the cluster\n",
|
||||
"aks_target = ComputeTarget.create(workspace = ws, \n",
|
||||
" name = aks_name, \n",
|
||||
" provisioning_configuration = provisioning_config)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"aks_target.wait_for_completion(show_output = True)\n",
|
||||
"print(aks_target.provisioning_state)\n",
|
||||
"print(aks_target.provisioning_errors)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Deploy web service to AKS"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"sample-deploy-to-aks"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"\n",
|
||||
"aks_config = AksWebservice.deploy_configuration()\n",
|
||||
"\n",
|
||||
"aks_service_name ='aks-service-ssl-1'\n",
|
||||
"\n",
|
||||
"aks_service = Model.deploy(workspace=ws,\n",
|
||||
" name=aks_service_name,\n",
|
||||
" models=[model],\n",
|
||||
" inference_config=inf_config,\n",
|
||||
" deployment_config=aks_config,\n",
|
||||
" deployment_target=aks_target,\n",
|
||||
" overwrite=True)\n",
|
||||
"\n",
|
||||
"aks_service.wait_for_deployment(show_output = True)\n",
|
||||
"print(aks_service.state)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Test the web service using run method\n",
|
||||
"We test the web sevice by passing data.\n",
|
||||
"Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"import json\n",
|
||||
"\n",
|
||||
"standard_sample_input = json.dumps({'param': {'a': 10, 'b': 9, 'c': 8, 'd': 7, 'e': 6, 'f': 5, 'g': 4, 'h': 3, 'i': 2, 'j': 1 }})\n",
|
||||
"\n",
|
||||
"aks_service.run(input_data=standard_sample_input)"
|
||||
]
|
||||
},
|
||||
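You can also call the SSL-terminated endpoint directly over HTTPS instead of using run(); a minimal sketch reusing the service keys (it mirrors the authenticated request pattern used elsewhere in these notebooks):

```python
import requests

key1, key2 = aks_service.get_keys()            # key auth is enabled by default
headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + key1}
resp = requests.post(aks_service.scoring_uri, data=standard_sample_input, headers=headers)
print(resp.status_code, resp.json())
```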
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Clean up\n",
|
||||
"Delete the service, image and model."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"aks_service.delete()\n",
|
||||
"model.delete()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "vaidyas"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,8 +0,0 @@
|
||||
name: production-deploy-to-aks-ssl
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- matplotlib
|
||||
- tqdm
|
||||
- scipy
|
||||
- scikit-learn
|
||||
@@ -1,625 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Deploying a web service to Azure Kubernetes Service (AKS)\n",
|
||||
"This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (one time action), and deploying a service to it. \n",
|
||||
"We then test and delete the service, image and model."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"from azureml.core.compute import AksCompute, ComputeTarget\n",
|
||||
"from azureml.core.webservice import Webservice, AksWebservice\n",
|
||||
"from azureml.core.model import Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"print(azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Get workspace\n",
|
||||
"Load existing workspace from the config file info."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Register the model\n",
|
||||
"Register an existing trained model, add descirption and tags."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Register the model\n",
|
||||
"from azureml.core.model import Model\n",
|
||||
"model = Model.register(model_path = \"sklearn_regression_model.pkl\", # this points to a local file\n",
|
||||
" model_name = \"sklearn_regression_model.pkl\", # this is the name the model is registered as\n",
|
||||
" tags = {'area': \"diabetes\", 'type': \"regression\"},\n",
|
||||
" description = \"Ridge regression model to predict diabetes\",\n",
|
||||
" workspace = ws)\n",
|
||||
"\n",
|
||||
"print(model.name, model.description, model.version)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Create the Environment\n",
|
||||
"Create an environment that the model will be deployed with"
|
||||
]
|
||||
},
|
||||
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"conda_deps = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.22.1','scipy'], pip_packages=['azureml-defaults', 'inference-schema'])\n",
"myenv = Environment(name='myenv')\n",
"myenv.python.conda_dependencies = conda_deps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Use a custom Docker image\n",
"\n",
"You can also specify a custom Docker image to be used as base image if you don't want to use the default base image provided by Azure ML. Please make sure the custom Docker image has Ubuntu >= 16.04, Conda >= 4.5.\\* and Python (3.5.\\* or 3.6.\\*).\n",
"\n",
"Only supported with `python` runtime.\n",
"```python\n",
"# use an image available in a public Container Registry without authentication\n",
"myenv.docker.base_image = \"mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda\"\n",
"\n",
"# or, use an image available in a private Container Registry\n",
"myenv.docker.base_image = \"myregistry.azurecr.io/mycustomimage:1.0\"\n",
"myenv.docker.base_image_registry.address = \"myregistry.azurecr.io\"\n",
"myenv.docker.base_image_registry.username = \"username\"\n",
"myenv.docker.base_image_registry.password = \"password\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Write the Entry Script\n",
"Write the script that will be used to run predictions with your model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import os\n",
"import pickle\n",
"import json\n",
"import numpy\n",
"import joblib\n",
"from sklearn.linear_model import Ridge\n",
"\n",
"def init():\n",
"    global model\n",
"    # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
"    # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
"    # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
"    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')\n",
"    # deserialize the model file back into a sklearn model\n",
"    model = joblib.load(model_path)\n",
"\n",
"# note you can pass in multiple rows for scoring\n",
"def run(raw_data):\n",
"    try:\n",
"        data = json.loads(raw_data)['data']\n",
"        data = numpy.array(data)\n",
"        result = model.predict(data)\n",
"        # you can return any data type as long as it is JSON-serializable\n",
"        return result.tolist()\n",
"    except Exception as e:\n",
"        error = str(e)\n",
"        return error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create the InferenceConfig\n",
"Create the inference config that will be used when deploying the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inf_config = InferenceConfig(entry_script='score.py', environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Model Profiling\n",
"\n",
"Profile your model to understand how much CPU and memory the service, created as a result of its deployment, will need. Profiling returns information such as CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage. You can profile your model (or more precisely the service built based on your model) on any CPU and/or memory combination where 0.1 <= CPU <= 3.5 and 0.1GB <= memory <= 15GB. If you do not provide a CPU and/or memory requirement, we will test it on the default configuration of 3.5 CPU and 15GB memory.\n",
"\n",
"In order to profile your model you will need:\n",
"- a registered model\n",
"- an entry script\n",
"- an inference configuration\n",
"- a single column tabular dataset, where each row contains a string representing sample request data sent to the service.\n",
"\n",
"Please note that profiling is a long-running operation and can take up to 25 minutes depending on the size of the dataset.\n",
"\n",
"At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring.\n",
"\n",
"Below is an example of how you can construct an input dataset to profile a service which expects its incoming requests to contain serialized json. In this case we created a dataset based on one hundred instances of the same request data. In real-world scenarios, however, we suggest that you use larger datasets with various inputs, especially if your model resource usage/behavior is input dependent."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You may want to register datasets with your workspace using the register() method so they can be shared with others, reused, and referred to by name in your script.\n",
"You can try to get the dataset first to see if it's already registered."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"from azureml.core import Datastore\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.data import dataset_type_definitions\n",
"\n",
"dataset_name = 'sample_request_data'\n",
"\n",
"dataset_registered = False\n",
"try:\n",
"    sample_request_data = Dataset.get_by_name(workspace=ws, name=dataset_name)\n",
"    dataset_registered = True\n",
"except Exception:\n",
"    print(\"The dataset {} is not registered in the workspace yet.\".format(dataset_name))\n",
"\n",
"if not dataset_registered:\n",
"    input_json = {'data': [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n",
"                           [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}\n",
"    # create a string that can be put in the body of the request\n",
"    serialized_input_json = json.dumps(input_json)\n",
"    dataset_content = []\n",
"    for i in range(100):\n",
"        dataset_content.append(serialized_input_json)\n",
"    sample_request_data = '\\n'.join(dataset_content)\n",
"    file_name = \"{}.txt\".format(dataset_name)\n",
"    with open(file_name, 'w') as f:\n",
"        f.write(sample_request_data)\n",
"\n",
"    # upload the txt file created above to the Datastore and create a dataset from it\n",
"    data_store = Datastore.get_default(ws)\n",
"    data_store.upload_files(['./' + file_name], target_path='sample_request_data')\n",
"    datastore_path = [(data_store, 'sample_request_data' + '/' + file_name)]\n",
"    sample_request_data = Dataset.Tabular.from_delimited_files(\n",
"        datastore_path,\n",
"        separator='\\n',\n",
"        infer_column_types=True,\n",
"        header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)\n",
"    sample_request_data = sample_request_data.register(workspace=ws,\n",
"                                                       name=dataset_name,\n",
"                                                       create_new_version=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have an input dataset we are ready to go ahead with profiling. In this case we are testing the previously introduced sklearn regression model on 1 CPU and 0.5 GB memory. The memory usage and recommendation presented in the result are measured in gigabytes. The CPU usage and recommendation are measured in CPU cores."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"from azureml.core import Environment\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.model import Model, InferenceConfig\n",
"\n",
"\n",
"environment = Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(conda_packages=[\n",
"    'pip==20.2.4'],\n",
"    pip_packages=[\n",
"    'azureml-defaults',\n",
"    'inference-schema[numpy-support]',\n",
"    'joblib',\n",
"    'numpy',\n",
"    'scikit-learn==0.22.1',\n",
"    'scipy'\n",
"])\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"# if cpu and memory_in_gb parameters are not provided\n",
"# the model will be profiled on default configuration of\n",
"# 3.5CPU and 15GB memory\n",
"profile = Model.profile(ws,\n",
"            'sklearn-%s' % datetime.now().strftime('%m%d%Y-%H%M%S'),\n",
"            [model],\n",
"            inference_config,\n",
"            input_dataset=sample_request_data,\n",
"            cpu=1.0,\n",
"            memory_in_gb=0.5)\n",
"\n",
"# profiling is a long running operation and may take up to 25 min\n",
"profile.wait_for_completion(True)\n",
"details = profile.get_details()"
]
},
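{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative aside (not part of the original notebook): the profiling result is a dictionary, and the recommendation can be read out of it. The key names below ('recommendedCpu', 'recommendedMemoryInGB') are assumptions that may differ across SDK versions.\n",
"```python\n",
"# key names are assumed for illustration; inspect 'details' to confirm\n",
"print('recommended CPU (cores):', details.get('recommendedCpu'))\n",
"print('recommended memory (GB):', details.get('recommendedMemoryInGB'))\n",
"```"
]
},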
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Provision the AKS Cluster\n",
"This is a one-time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it.\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your AKS cluster\n",
"aks_name = 'my-aks-9'\n",
"\n",
"# Verify that the cluster does not exist already\n",
"try:\n",
"    aks_target = ComputeTarget(workspace=ws, name=aks_name)\n",
"    print('Found existing cluster, using it.')\n",
"except ComputeTargetException:\n",
"    # Use the default configuration (can also provide parameters to customize)\n",
"    prov_config = AksCompute.provisioning_configuration()\n",
"\n",
"    # Create the cluster\n",
"    aks_target = ComputeTarget.create(workspace = ws, \n",
"                                      name = aks_name, \n",
"                                      provisioning_configuration = prov_config)\n",
"\n",
"if aks_target.get_status() != \"Succeeded\":\n",
"    aks_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create AKS Cluster in an existing virtual network (optional)\n",
"See code snippet below. Check the documentation [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-network-security-overview) for more details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from azureml.core.compute import ComputeTarget, AksCompute\n",
"\n",
"# # Create the compute configuration and set virtual network information\n",
"# config = AksCompute.provisioning_configuration(location=\"eastus2\")\n",
"# config.vnet_resourcegroup_name = \"mygroup\"\n",
"# config.vnet_name = \"mynetwork\"\n",
"# config.subnet_name = \"default\"\n",
"# config.service_cidr = \"10.0.0.0/16\"\n",
"# config.dns_service_ip = \"10.0.0.10\"\n",
"# config.docker_bridge_cidr = \"172.17.0.1/16\"\n",
"\n",
"# # Create the compute target\n",
"# aks_target = ComputeTarget.create(workspace = ws,\n",
"#                                   name = \"myaks\",\n",
"#                                   provisioning_configuration = config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Enable SSL on the AKS Cluster (optional)\n",
"See code snippet below. Check the documentation [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-network-security-overview#secure-the-inferencing-environment-v1) for more details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# provisioning_config = AksCompute.provisioning_configuration(ssl_cert_pem_file=\"cert.pem\", ssl_key_pem_file=\"key.pem\", ssl_cname=\"www.contoso.com\")"
]
},
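{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged aside (not in the original notebook), the provisioning configuration can also request a Microsoft-generated certificate via a leaf domain label; the `leaf_domain_label` parameter below is an assumption to verify against your SDK version.\n",
"```python\n",
"# provisioning_config = AksCompute.provisioning_configuration(leaf_domain_label=\"myservice\")\n",
"```"
]
},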
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_target.wait_for_completion(show_output = True)\n",
"print(aks_target.provisioning_state)\n",
"print(aks_target.provisioning_errors)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Optional step: Attach existing AKS cluster\n",
"\n",
"If you have an existing AKS cluster in your Azure subscription, you can attach it to the Workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # Use the default configuration (can also provide parameters to customize)\n",
"# resource_id = '/subscriptions/92c76a2f-0e1c-4216-b65e-abf7a3f34c1e/resourcegroups/raymondsdk0604/providers/Microsoft.ContainerService/managedClusters/my-aks-0605d37425356b7d01'\n",
"\n",
"# create_name = 'my-existing-aks'\n",
"# # Attach the cluster\n",
"# attach_config = AksCompute.attach_configuration(resource_id=resource_id)\n",
"# aks_target = ComputeTarget.attach(workspace=ws, name=create_name, attach_configuration=attach_config)\n",
"# # Wait for the operation to complete\n",
"# aks_target.wait_for_completion(True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploy web service to AKS"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"sample-deploy-to-aks"
]
},
"outputs": [],
"source": [
"# Set the web service configuration (using default here)\n",
"aks_config = AksWebservice.deploy_configuration()\n",
"\n",
"# # Enable token auth and disable (key) auth on the webservice\n",
"# aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True, auth_enabled=False)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"sample-deploy-to-aks"
]
},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name = 'aks-service-1'\n",
"\n",
"aks_service = Model.deploy(workspace=ws,\n",
"                           name=aks_service_name,\n",
"                           models=[model],\n",
"                           inference_config=inf_config,\n",
"                           deployment_config=aks_config,\n",
"                           deployment_target=aks_target)\n",
"\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
]
},
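{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the service does not reach a Healthy state, a minimal troubleshooting sketch (not in the original notebook) is to pull the container logs with the standard `get_logs()` call:\n",
"```python\n",
"# fetch container logs from the service for troubleshooting\n",
"# print(aks_service.get_logs())\n",
"```"
]
},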
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Test the web service using run method\n",
"We test the web service by passing data.\n",
"The run() method retrieves the API keys behind the scenes to make sure that the call is authenticated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"import json\n",
"\n",
"test_sample = json.dumps({'data': [\n",
"    [1,2,3,4,5,6,7,8,9,10], \n",
"    [10,9,8,7,6,5,4,3,2,1]\n",
"]})\n",
"test_sample = bytes(test_sample, encoding='utf8')\n",
"\n",
"prediction = aks_service.run(input_data=test_sample)\n",
"print(prediction)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Test the web service using raw HTTP request (optional)\n",
"Alternatively you can construct a raw HTTP request and send it to the service. In this case you need to explicitly pass the HTTP header. This process is shown in the next 2 cells."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# # if (key) auth is enabled, retrieve the API keys. AML generates two keys.\n",
"# key1, key2 = aks_service.get_keys()\n",
"# print(key1)\n",
"\n",
"# # if token auth is enabled, retrieve the token.\n",
"# access_token, refresh_after = aks_service.get_token()"
]
},
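{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tokens expire, so a long-lived client needs to refresh them. A minimal sketch (not in the original notebook), assuming `refresh_after` is a UTC datetime as returned above:\n",
"```python\n",
"# from datetime import datetime\n",
"# if datetime.utcnow() >= refresh_after:\n",
"#     access_token, refresh_after = aks_service.get_token()\n",
"```"
]
},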
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# construct raw HTTP request and send to the service\n",
"# %%time\n",
"\n",
"# import requests\n",
"\n",
"# import json\n",
"\n",
"# test_sample = json.dumps({'data': [\n",
"#     [1,2,3,4,5,6,7,8,9,10], \n",
"#     [10,9,8,7,6,5,4,3,2,1]\n",
"# ]})\n",
"# test_sample = bytes(test_sample, encoding='utf8')\n",
"\n",
"# # If (key) auth is enabled, don't forget to add key to the HTTP header.\n",
"# headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + key1}\n",
"\n",
"# # If token auth is enabled, don't forget to add token to the HTTP header.\n",
"# headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + access_token}\n",
"\n",
"# resp = requests.post(aks_service.scoring_uri, test_sample, headers=headers)\n",
"\n",
"# print(\"prediction:\", resp.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Clean up\n",
"Delete the service, image and model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service.delete()\n",
"model.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "vaidyas"
}
],
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -1,8 +0,0 @@
name: production-deploy-to-aks
dependencies:
- pip:
  - azureml-sdk
  - matplotlib
  - tqdm
  - scipy
  - scikit-learn
Binary file not shown.
@@ -53,7 +53,7 @@
"\n",
"We will showcase one of the tabular data explainers: TabularExplainer (SHAP).\n",
"\n",
"Problem: Boston Housing Price Prediction with scikit-learn (train a model and run an explainer remotely via AMLCompute, and download and visualize the remotely-calculated explanations.)\n",
"Problem: Housing Price Prediction with scikit-learn (train a model and run an explainer remotely via AMLCompute, and download and visualize the remotely-calculated explanations.)\n",
"\n",
"|  |\n",
"|:--:|\n"
@@ -270,6 +270,7 @@
"sklearn_ver = None\n",
"pandas_ver = None\n",
"joblib_ver = None\n",
"scipy_ver = None\n",
"for dist in list(available_packages):\n",
"    if dist.key == 'scikit-learn':\n",
"        sklearn_ver = dist.version\n",
@@ -277,21 +278,26 @@
"        pandas_ver = dist.version\n",
"    elif dist.key == 'joblib':\n",
"        joblib_ver = dist.version\n",
"    elif dist.key == 'scipy':\n",
"        scipy_ver = dist.version\n",
"sklearn_dep = 'scikit-learn'\n",
"pandas_dep = 'pandas'\n",
"joblib_dep = 'joblib'\n",
"scipy_dep = 'scipy'\n",
"if sklearn_ver:\n",
"    sklearn_dep = 'scikit-learn=={}'.format(sklearn_ver)\n",
"if pandas_ver:\n",
"    pandas_dep = 'pandas=={}'.format(pandas_ver)\n",
"if joblib_ver:\n",
"    joblib_dep = 'joblib=={}'.format(joblib_ver)\n",
"if scipy_ver:\n",
"    scipy_dep = 'scipy=={}'.format(scipy_ver)\n",
"# Specify CondaDependencies obj\n",
"# The CondaDependencies specifies the conda and pip packages that are installed in the environment\n",
"# the submitted job is run in. Note the remote environment(s) needs to be similar to the local\n",
"# environment, otherwise if a model is trained or deployed in a different environment this can\n",
"# cause errors. Please take extra care when specifying your dependencies in a production environment.\n",
"azureml_pip_packages.extend([sklearn_dep, pandas_dep, joblib_dep])\n",
"azureml_pip_packages.extend([sklearn_dep, pandas_dep, joblib_dep, scipy_dep])\n",
"run_config.environment.python.conda_dependencies = CondaDependencies.create(pip_packages=azureml_pip_packages, python_version=python_version)\n",
"\n",
"from azureml.core import ScriptRunConfig\n",
@@ -423,8 +429,8 @@
"outputs": [],
"source": [
"# Retrieve x_test for visualization\n",
"x_test_path = './x_test_boston_housing.pkl'\n",
"run.download_file('x_test_boston_housing.pkl', output_file_path=x_test_path)"
"x_test_path = './x_test_california_housing.pkl'\n",
"run.download_file('x_test_california_housing.pkl', output_file_path=x_test_path)"
]
},
{
@@ -433,7 +439,7 @@
"metadata": {},
"outputs": [],
"source": [
"x_test = joblib.load('x_test_boston_housing.pkl')"
"x_test = joblib.load('x_test_california_housing.pkl')"
]
},
{

@@ -1,17 +0,0 @@
name: explain-model-on-amlcompute
dependencies:
- pip:
  - azureml-sdk
  - azureml-interpret
  - flask
  - flask-cors
  - gevent>=1.3.6
  - ipython
  - matplotlib
  - ipywidgets
  - raiwidgets~=0.28.0
  - itsdangerous==2.0.1
  - markupsafe<2.1.0
  - scipy>=1.5.3
  - protobuf==3.20.0
  - jinja2==3.0.3
@@ -1,7 +1,7 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.

from sklearn import datasets
from sklearn.datasets import fetch_california_housing
from sklearn.linear_model import Ridge
from interpret.ext.blackbox import TabularExplainer
from azureml.interpret import ExplanationClient
@@ -14,20 +14,20 @@ import numpy as np
OUTPUT_DIR = './outputs/'
os.makedirs(OUTPUT_DIR, exist_ok=True)

boston_data = datasets.load_boston()
california_data = fetch_california_housing()

run = Run.get_context()
client = ExplanationClient.from_run(run)

X_train, X_test, y_train, y_test = train_test_split(boston_data.data,
                                                    boston_data.target,
X_train, X_test, y_train, y_test = train_test_split(california_data.data,
                                                    california_data.target,
                                                    test_size=0.2,
                                                    random_state=0)
# write x_test out as a pickle file for later visualization
x_test_pkl = 'x_test.pkl'
with open(x_test_pkl, 'wb') as file:
    joblib.dump(value=X_test, filename=os.path.join(OUTPUT_DIR, x_test_pkl))
run.upload_file('x_test_boston_housing.pkl', os.path.join(OUTPUT_DIR, x_test_pkl))
run.upload_file('x_test_california_housing.pkl', os.path.join(OUTPUT_DIR, x_test_pkl))


alpha = 0.5
@@ -50,7 +50,7 @@ original_model = run.register_model(model_name='model_explain_model_on_amlcomp',
                                    model_path='original_model.pkl')

# Explain predictions on your local machine
tabular_explainer = TabularExplainer(model, X_train, features=boston_data.feature_names)
tabular_explainer = TabularExplainer(model, X_train, features=california_data.feature_names)

# Explain overall model predictions (global explanation)
# Passing in test dataset for evaluation examples - note it must be a representative sample of the original data
@@ -60,5 +60,5 @@ global_explanation = tabular_explainer.explain_global(X_test)

# Uploading model explanation data for storage or visualization in webUX
# The explanation can then be downloaded on any compute
comment = 'Global explanation on regression model trained on boston dataset'
comment = 'Global explanation on regression model trained on california dataset'
client.upload_model_explanation(global_explanation, comment=comment, model_id=original_model.id)

@@ -1,18 +0,0 @@
name: save-retrieve-explanations-run-history
dependencies:
- pip:
  - azureml-sdk
  - azureml-interpret
  - flask
  - flask-cors
  - gevent>=1.3.6
  - ipython
  - matplotlib
  - ipywidgets
  - raiwidgets~=0.28.0
  - packaging>=20.9
  - itsdangerous==2.0.1
  - markupsafe<2.1.0
  - scipy>=1.5.3
  - protobuf==3.20.0
  - jinja2==3.0.3
@@ -370,7 +370,7 @@
"# cause errors. Please take extra care when specifying your dependencies in a production environment.\n",
"myenv = CondaDependencies.create(\n",
"    python_version=python_version,\n",
"    conda_packages=['pip==20.2.4', numpy_dep],\n",
"    conda_packages=['pip==22.3.1', numpy_dep],\n",
"    pip_packages=['pyyaml', sklearn_dep, pandas_dep, numba_dep] + azureml_pip_packages)\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
@@ -404,7 +404,7 @@
"\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, \n",
"          memory_gb=2, \n",
"          memory_gb=4, \n",
"          tags={\"data\": \"IBM_Attrition\", \n",
"          \"method\" : \"local_explanation\"}, \n",
"          description='Get local explanations for IBM Employee Attrition data')\n",

@@ -1,18 +0,0 @@
name: train-explain-model-locally-and-deploy
dependencies:
- pip:
  - azureml-sdk
  - azureml-interpret
  - flask
  - flask-cors
  - gevent>=1.3.6
  - ipython
  - matplotlib
  - ipywidgets
  - raiwidgets~=0.28.0
  - packaging>=20.9
  - itsdangerous==2.0.1
  - markupsafe<2.1.0
  - scipy>=1.5.3
  - protobuf==3.20.0
  - jinja2==3.0.3
@@ -1,18 +0,0 @@
name: train-explain-model-on-amlcompute-and-deploy
dependencies:
- pip:
  - azureml-sdk
  - azureml-interpret
  - flask
  - flask-cors
  - gevent>=1.3.6
  - ipython
  - matplotlib
  - azureml-core
  - ipywidgets
  - raiwidgets~=0.28.0
  - itsdangerous==2.0.1
  - markupsafe<2.1.0
  - scipy>=1.5.3
  - protobuf==3.20.0
  - jinja2==3.0.3
@@ -33,7 +33,6 @@
"| Data store | Supported as a source | Supported as a sink |\n",
"| --- | --- | --- |\n",
"| Azure Blob Storage | Yes | Yes |\n",
"| Azure Data Lake Storage Gen 1 | Yes | Yes |\n",
"| Azure Data Lake Storage Gen 2 | Yes | Yes |\n",
"| Azure SQL Database | Yes | Yes |\n",
"| Azure Database for PostgreSQL | Yes | Yes |\n",
@@ -126,76 +125,29 @@
},
"outputs": [],
"source": [
"from azureml.exceptions import UserErrorException\n",
"\n",
"blob_datastore_name='MyBlobDatastore'\n",
"account_name=os.getenv(\"BLOB_ACCOUNTNAME_62\", \"<my-account-name>\") # Storage account name\n",
"container_name=os.getenv(\"BLOB_CONTAINER_62\", \"<my-container-name>\") # Name of Azure blob container\n",
"account_key=os.getenv(\"BLOB_ACCOUNT_KEY_62\", \"<my-account-key>\") # Storage account key\n",
"\n",
"try:\n",
"    blob_datastore = Datastore.get(ws, blob_datastore_name)\n",
"    print(\"Found Blob Datastore with name: %s\" % blob_datastore_name)\n",
"except UserErrorException:\n",
"    blob_datastore = Datastore.register_azure_blob_container(\n",
"        workspace=ws,\n",
"        datastore_name=blob_datastore_name,\n",
"        account_name=account_name, # Storage account name\n",
"        container_name=container_name, # Name of Azure blob container\n",
"        account_key=account_key) # Storage account key\n",
"    print(\"Registered blob datastore with name: %s\" % blob_datastore_name)\n",
"\n",
"blob_data_ref = DataReference(\n",
"    datastore=blob_datastore,\n",
"    data_reference_name=\"blob_test_data\",\n",
"    path_on_datastore=\"testdata\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Azure Data Lake Storage Gen1\n",
"\n",
"Please consult the following articles for detailed steps on setting up service principal authentication and assigning correct permissions to the Data Lake Storage account:\n",
"\n",
"https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory\n",
"https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-data-lake-store#use-service-principal-authentication"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"datastore_name='MyAdlsDatastore'\n",
"subscription_id=os.getenv(\"ADL_SUBSCRIPTION_62\", \"<my-subscription-id>\") # subscription id of ADLS account\n",
"resource_group=os.getenv(\"ADL_RESOURCE_GROUP_62\", \"<my-resource-group>\") # resource group of ADLS account\n",
"store_name=os.getenv(\"ADL_STORENAME_62\", \"<my-datastore-name>\") # ADLS account name\n",
"tenant_id=os.getenv(\"ADL_TENANT_62\", \"<my-tenant-id>\") # tenant id of service principal\n",
"client_id=os.getenv(\"ADL_CLIENTID_62\", \"<my-client-id>\") # client id of service principal\n",
"client_st=os.getenv(\"ADL_CLIENT_SECRET_62\", \"<my-client-secret>\") # the secret of service principal\n",
"\n",
"try:\n",
"    adls_datastore = Datastore.get(ws, datastore_name)\n",
"    print(\"Found datastore with name: %s\" % datastore_name)\n",
"except UserErrorException:\n",
"    adls_datastore = Datastore.register_azure_data_lake(\n",
"        workspace=ws,\n",
"        datastore_name=datastore_name,\n",
"        subscription_id=subscription_id, # subscription id of ADLS account\n",
"        resource_group=resource_group, # resource group of ADLS account\n",
"        store_name=store_name, # ADLS account name\n",
"        tenant_id=tenant_id, # tenant id of service principal\n",
"        client_id=client_id, # client id of service principal\n",
"        client_secret=client_st) # the secret of service principal\n",
"    print(\"Registered datastore with name: %s\" % datastore_name)\n",
"\n",
"adls_data_ref = DataReference(\n",
"    datastore=adls_datastore,\n",
"    data_reference_name=\"adls_test_data\",\n",
"    path_on_datastore=\"testdata\")"
"# from azureml.exceptions import UserErrorException\n",
"#\n",
"# blob_datastore_name='MyBlobDatastore'\n",
"# account_name=os.getenv(\"BLOB_ACCOUNTNAME_62\", \"<my-account-name>\") # Storage account name\n",
"# container_name=os.getenv(\"BLOB_CONTAINER_62\", \"<my-container-name>\") # Name of Azure blob container\n",
"# account_key=os.getenv(\"BLOB_ACCOUNT_KEY_62\", \"<my-account-key>\") # Storage account key\n",
"#\n",
"# try:\n",
"#     blob_datastore = Datastore.get(ws, blob_datastore_name)\n",
"#     print(\"Found Blob Datastore with name: %s\" % blob_datastore_name)\n",
"# except UserErrorException:\n",
"#     blob_datastore = Datastore.register_azure_blob_container(\n",
"#         workspace=ws,\n",
"#         datastore_name=blob_datastore_name,\n",
"#         account_name=account_name, # Storage account name\n",
"#         container_name=container_name, # Name of Azure blob container\n",
"#         account_key=account_key) # Storage account key\n",
"#     print(\"Registered blob datastore with name: %s\" % blob_datastore_name)\n",
"#\n",
"# blob_data_ref = DataReference(\n",
"#     datastore=blob_datastore,\n",
"#     data_reference_name=\"blob_test_data\",\n",
"#     path_on_datastore=\"testdata\")"
]
},
{
@@ -389,24 +341,24 @@
"metadata": {},
"outputs": [],
"source": [
"data_factory_name = 'adftest'\n",
"\n",
"def get_or_create_data_factory(workspace, factory_name):\n",
"    try:\n",
"        return DataFactoryCompute(workspace, factory_name)\n",
"    except ComputeTargetException as e:\n",
"        if 'ComputeTargetNotFound' in e.message:\n",
"            print('Data factory not found, creating...')\n",
"            provisioning_config = DataFactoryCompute.provisioning_configuration()\n",
"            data_factory = ComputeTarget.create(workspace, factory_name, provisioning_config)\n",
"            data_factory.wait_for_completion()\n",
"            return data_factory\n",
"        else:\n",
"            raise e\n",
"    \n",
"data_factory_compute = get_or_create_data_factory(ws, data_factory_name)\n",
"\n",
"print(\"Setup Azure Data Factory account complete\")"
"# data_factory_name = 'adftest'\n",
"#\n",
"# def get_or_create_data_factory(workspace, factory_name):\n",
"#     try:\n",
"#         return DataFactoryCompute(workspace, factory_name)\n",
"#     except ComputeTargetException as e:\n",
"#         if 'ComputeTargetNotFound' in e.message:\n",
"#             print('Data factory not found, creating...')\n",
"#             provisioning_config = DataFactoryCompute.provisioning_configuration()\n",
"#             data_factory = ComputeTarget.create(workspace, factory_name, provisioning_config)\n",
"#             data_factory.wait_for_completion()\n",
"#             return data_factory\n",
"#         else:\n",
"#             raise e\n",
"#\n",
"# data_factory_compute = get_or_create_data_factory(ws, data_factory_name)\n",
"#\n",
"# print(\"Setup Azure Data Factory account complete\")"
]
},
{
@@ -440,13 +392,21 @@
"metadata": {},
"outputs": [],
"source": [
"transfer_adls_to_blob = DataTransferStep(\n",
"    name=\"transfer_adls_to_blob\",\n",
"    source_data_reference=adls_data_ref,\n",
"    destination_data_reference=blob_data_ref,\n",
"    compute_target=data_factory_compute)\n",
"\n",
"print(\"Data transfer step created\")"
"# # TODO: 3012801 - Use ADLS Gen2 datastore.\n",
"# blob_data_ref2 = DataReference(\n",
"#     datastore=blob_datastore,\n",
"#     data_reference_name=\"blob_test_data2\",\n",
"#     path_on_datastore=\"testdata2\")\n",
"#\n",
"# transfer_adls_to_blob = DataTransferStep(\n",
"#     name=\"transfer_adls_to_blob\",\n",
"#     source_data_reference=blob_data_ref,\n",
"#     destination_data_reference=blob_data_ref2,\n",
"#     compute_target=data_factory_compute,\n",
"#     source_reference_type='file',\n",
"#     destination_reference_type=\"file\")\n",
"#\n",
"# print(\"Data transfer step created\")"
]
},
{
@@ -497,13 +457,13 @@
"metadata": {},
"outputs": [],
"source": [
"pipeline_01 = Pipeline(\n",
"    description=\"data_transfer_01\",\n",
"    workspace=ws,\n",
"    steps=[transfer_adls_to_blob])\n",
"\n",
"pipeline_run_01 = Experiment(ws, \"Data_Transfer_example_01\").submit(pipeline_01)\n",
"pipeline_run_01.wait_for_completion()"
"# pipeline_01 = Pipeline(\n",
"#     description=\"data_transfer_01\",\n",
"#     workspace=ws,\n",
"#     steps=[transfer_adls_to_blob])\n",
"#\n",
"# pipeline_run_01 = Experiment(ws, \"Data_Transfer_example_01\").submit(pipeline_01)\n",
"# pipeline_run_01.wait_for_completion()"
]
},
{
@@ -534,8 +494,8 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run_01).show()"
"# from azureml.widgets import RunDetails\n",
"# RunDetails(pipeline_run_01).show()"
]
},
{

@@ -1,5 +0,0 @@
name: aml-pipelines-data-transfer
dependencies:
- pip:
  - azureml-sdk
  - azureml-widgets
@@ -1,6 +0,0 @@
name: aml-pipelines-getting-started
dependencies:
- pip:
  - azureml-sdk
  - azureml-widgets
  - protobuf==3.20.0
@@ -1,5 +0,0 @@
name: aml-pipelines-how-to-use-modulestep
dependencies:
- pip:
  - azureml-sdk
  - azureml-widgets
@@ -1,5 +0,0 @@
name: aml-pipelines-how-to-use-pipeline-drafts
dependencies:
- pip:
  - azureml-sdk
  - azureml-widgets
@@ -292,7 +292,7 @@
"metadata": {},
"outputs": [],
"source": [
"tf_env = Environment.get(ws, name='AzureML-tensorflow-2.6-ubuntu20.04-py38-cuda11-gpu')"
"tf_env = Environment.get(ws, name='AzureML-tensorflow-2.16-cuda12')"
]
},
{

@@ -1,9 +0,0 @@
name: aml-pipelines-parameter-tuning-with-hyperdrive
dependencies:
- pip:
  - azureml-sdk
  - azureml-widgets
  - matplotlib
  - numpy
  - pandas_ml
  - azureml-dataset-runtime[pandas,fuse]
@@ -1,6 +0,0 @@
name: aml-pipelines-publish-and-run-using-rest-endpoint
dependencies:
- pip:
  - azureml-sdk
  - azureml-widgets
  - requests
@@ -1,5 +0,0 @@
name: aml-pipelines-setup-schedule-for-a-published-pipeline
dependencies:
- pip:
  - azureml-sdk
  - azureml-widgets
@@ -1,6 +0,0 @@
name: aml-pipelines-setup-versioned-pipeline-endpoints
dependencies:
- pip:
  - azureml-sdk
  - azureml-widgets
  - requests
@@ -1,5 +0,0 @@
name: aml-pipelines-showcasing-datapath-and-pipelineparameter
dependencies:
- pip:
  - azureml-sdk
  - azureml-widgets
Some files were not shown because too many files have changed in this diff