{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automated Machine Learning\n",
"_**Classification of credit card fraudulent transactions with a local run**_\n",
"\n",
"## Contents\n",
"1. [Introduction](#Introduction)\n",
"1. [Setup](#Setup)\n",
"1. [Train](#Train)\n",
"1. [Results](#Results)\n",
"1. [Tests](#Tests)\n",
"1. [Explanation](#Explanation)\n",
"1. [Acknowledgements](#Acknowledgements)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction\n",
"\n",
"In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict whether a credit card transaction is fraudulent.\n",
"\n",
"This notebook uses local machine compute to train the model.\n",
"\n",
"If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace.\n",
"\n",
"In this notebook you will learn how to:\n",
"1. Create an experiment using an existing workspace.\n",
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model.\n",
"4. Explore the results.\n",
"5. Test the fitted model.\n",
"6. Explore a model's explanation and view the feature importance in the Azure portal.\n",
"7. Create an AKS cluster, deploy the AutoML scoring model and the explainer model as a web service to AKS, and consume the web service."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from matplotlib import pyplot as plt\n",
"import pandas as pd\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.interpret import ExplanationClient"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
"\n",
"# choose a name for experiment\n",
"experiment_name = \"automl-classification-ccard-local\"\n",
"\n",
"experiment = Experiment(ws, experiment_name)\n",
"\n",
"output = {}\n",
"output[\"Subscription ID\"] = ws.subscription_id\n",
"output[\"Workspace\"] = ws.name\n",
"output[\"Resource Group\"] = ws.resource_group\n",
"output[\"Location\"] = ws.location\n",
"output[\"Experiment Name\"] = experiment.name\n",
"output[\"SDK Version\"] = azureml.core.VERSION\n",
"pd.set_option(\"display.max_colwidth\", None)\n",
"outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
"outputDf.T"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data\n",
"\n",
"Load the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using `random_split` and extract the training data for the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n",
"dataset = Dataset.Tabular.from_delimited_files(data)\n",
"training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n",
"label_column_name = \"Class\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train\n",
"\n",
"Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.\n",
"\n",
"|Property|Description|\n",
"|-|-|\n",
"|**task**|classification or regression|\n",
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
"|**enable_early_stopping**|Stop the run if the metric score is not showing improvement.|\n",
"|**n_cross_validations**|Number of cross-validation splits.|\n",
"|**training_data**|Input dataset, containing both features and label column.|\n",
"|**label_column_name**|The name of the label column.|\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "enable-ensemble"
},
"outputs": [],
"source": [
"automl_settings = {\n",
"    \"n_cross_validations\": 3,\n",
"    \"primary_metric\": \"average_precision_score_weighted\",\n",
"    \"experiment_timeout_hours\": 0.25,  # Time limit for testing purposes; remove it for real use cases, as it drastically limits the ability to find the best possible model\n",
"    \"verbosity\": logging.INFO,\n",
"    \"enable_stack_ensemble\": False,\n",
"}\n",
"\n",
"automl_config = AutoMLConfig(\n",
"    task=\"classification\",\n",
"    debug_log=\"automl_errors.log\",\n",
"    training_data=training_data,\n",
"    label_column_name=label_column_name,\n",
"    **automl_settings,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations, this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"local_run = experiment.submit(automl_config, show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you need to retrieve a run that already started, use the following code\n",
"# from azureml.train.automl.run import AutoMLRun\n",
"# local_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Widget for Monitoring Runs\n",
"\n",
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
"\n",
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.widgets import RunDetails\n",
"\n",
"RunDetails(local_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Analyze results\n",
"\n",
"#### Retrieve the Best Model\n",
"\n",
"Below we select the best pipeline from our iterations. The `get_output` method on `local_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run, fitted_model = local_run.get_output()\n",
"fitted_model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Print the properties of the model\n",
"The `fitted_model` is a Python object, and you can read its different properties; a minimal sketch follows in the cell below.\n"
]
},
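{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch, assuming the returned fitted_model is a scikit-learn-style\n",
"# pipeline (as AutoML produces): print the name and type of each pipeline step.\n",
"for step_name, step in fitted_model.steps:\n",
"    print(step_name, type(step).__name__)"
]
},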
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tests\n",
"\n",
"Now that the model is trained, take the validation split held out earlier, convert it to pandas DataFrames locally, and then run the test data through the trained model to get the predicted values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# convert the test data to dataframe\n",
"X_test_df = validation_data.drop_columns(\n",
"    columns=[label_column_name]\n",
").to_pandas_dataframe()\n",
"y_test_df = validation_data.keep_columns(\n",
"    columns=[label_column_name], validate=True\n",
").to_pandas_dataframe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# call the predict function on the model\n",
"y_pred = fitted_model.predict(X_test_df)\n",
"y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Calculate metrics for the prediction\n",
"\n",
"Now visualize the results as a confusion matrix to show how the truth (actual) values compare to the predicted values \n",
"from the trained model that was returned."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import confusion_matrix\n",
"import numpy as np\n",
"import itertools\n",
"\n",
"cf = confusion_matrix(y_test_df.values, y_pred)\n",
"plt.imshow(cf, cmap=plt.cm.Blues, interpolation=\"nearest\")\n",
"plt.colorbar()\n",
"plt.title(\"Confusion Matrix\")\n",
"plt.xlabel(\"Predicted\")\n",
"plt.ylabel(\"Actual\")\n",
"class_labels = [\"False\", \"True\"]\n",
"tick_marks = np.arange(len(class_labels))\n",
"plt.xticks(tick_marks, class_labels)\n",
"plt.yticks([-0.5, 0, 1, 1.5], [\"\", \"False\", \"True\", \"\"])\n",
"# plotting text value inside cells\n",
"thresh = cf.max() / 2.0\n",
"for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):\n",
"    plt.text(\n",
"        j,\n",
"        i,\n",
"        format(cf[i, j], \"d\"),\n",
"        horizontalalignment=\"center\",\n",
"        color=\"white\" if cf[i, j] > thresh else \"black\",\n",
"    )\n",
"plt.show()"
]
},
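{
"cell_type": "markdown",
"metadata": {},
"source": [
"Beyond the visualization, a quick numeric summary can be computed for the same predictions. The next cell is a minimal sketch, assuming scikit-learn is available in the environment (it is already used above)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch: per-class precision, recall and F1 for the predictions above.\n",
"from sklearn.metrics import classification_report\n",
"\n",
"print(classification_report(y_test_df.values, y_pred))"
]
},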
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explanation\n",
"In this section, we will show how to compute model explanations and visualize the explanations using the azureml-interpret package. We will also show how to serve the AutoML model and the explainer model by deploying them as an AKS web service.\n",
"\n",
"Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data.\n",
"\n",
"### Run the explanation\n",
"#### Download the engineered feature importance from artifact store\n",
"You can use `ExplanationClient` to download the engineered feature explanations from the artifact store of the `best_run`. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the engineered features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"client = ExplanationClient.from_run(best_run)\n",
"engineered_explanations = client.download_model_explanation(raw=False)\n",
"print(engineered_explanations.get_feature_importance_dict())\n",
"print(\n",
"    \"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:\\n\"\n",
"    + best_run.get_portal_url()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download the raw feature importance from artifact store\n",
"You can use `ExplanationClient` to download the raw feature explanations from the artifact store of the `best_run`. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the raw features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"raw_explanations = client.download_model_explanation(raw=True)\n",
"print(raw_explanations.get_feature_importance_dict())\n",
"print(\n",
"    \"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:\\n\"\n",
"    + best_run.get_portal_url()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve any other AutoML model from training"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_run, fitted_model = local_run.get_output(metric=\"accuracy\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Setup the model explanations for AutoML models\n",
"From the `fitted_model`, `automl_setup_model_explanations` generates the following, which are used for getting the engineered explanations:\n",
"\n",
"1. Featurized data from the train and test samples\n",
"2. The list of engineered feature names\n",
"3. The classes in your label column, in classification scenarios\n",
"\n",
"The returned `automl_explainer_setup_obj` contains all the structures from the above list."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train = training_data.drop_columns(columns=[label_column_name])\n",
"y_train = training_data.keep_columns(columns=[label_column_name], validate=True)\n",
"X_test = validation_data.drop_columns(columns=[label_column_name])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl.runtime.automl_explain_utilities import (\n",
"    automl_setup_model_explanations,\n",
")\n",
"\n",
"automl_explainer_setup_obj = automl_setup_model_explanations(\n",
"    fitted_model,\n",
"    X=X_train,\n",
"    X_test=X_test,\n",
"    y=y_train,\n",
"    task=\"classification\",\n",
"    automl_run=automl_run,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Initialize the Mimic Explainer for feature importance\n",
"For explaining the AutoML models, use the `MimicWrapper` from the azureml-interpret package. The `MimicWrapper` can be initialized with fields from `automl_explainer_setup_obj`, your workspace, and a surrogate model to explain the AutoML model (`fitted_model` here). The `MimicWrapper` also takes the `automl_run` object, to which the engineered explanations will be uploaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.interpret.mimic_wrapper import MimicWrapper\n",
"\n",
"explainer = MimicWrapper(\n",
"    ws,\n",
"    automl_explainer_setup_obj.automl_estimator,\n",
"    explainable_model=automl_explainer_setup_obj.surrogate_model,\n",
"    init_dataset=automl_explainer_setup_obj.X_transform,\n",
"    run=automl_explainer_setup_obj.automl_run,\n",
"    features=automl_explainer_setup_obj.engineered_feature_names,\n",
"    feature_maps=[automl_explainer_setup_obj.feature_map],\n",
"    classes=automl_explainer_setup_obj.classes,\n",
"    explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params,\n",
")"
]
},
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"#### Use Mimic Explainer for computing and visualizing engineered feature importance\n",
|
|
"The explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features."
|
|
]
|
|
},
|
|
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compute the engineered explanations\n",
"engineered_explanations = explainer.explain(\n",
"    [\"local\", \"global\"], eval_dataset=automl_explainer_setup_obj.X_test_transform\n",
")\n",
"print(engineered_explanations.get_feature_importance_dict())\n",
"print(\n",
"    \"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:\\n\"\n",
"    + automl_run.get_portal_url()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Use Mimic Explainer for computing and visualizing raw feature importance\n",
"The `explain()` method in `MimicWrapper` can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the original/raw features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compute the raw explanations\n",
"raw_explanations = explainer.explain(\n",
"    [\"local\", \"global\"],\n",
"    get_raw=True,\n",
"    raw_feature_names=automl_explainer_setup_obj.raw_feature_names,\n",
"    eval_dataset=automl_explainer_setup_obj.X_test_transform,\n",
"    raw_eval_dataset=automl_explainer_setup_obj.X_test_raw,\n",
")\n",
"print(raw_explanations.get_feature_importance_dict())\n",
"print(\n",
"    \"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:\\n\"\n",
"    + automl_run.get_portal_url()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Initialize the scoring explainer, then save and upload it for later use in scoring explanation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer\n",
"import joblib\n",
"\n",
"# Initialize the ScoringExplainer\n",
"scoring_explainer = TreeScoringExplainer(\n",
"    explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]\n",
")\n",
"\n",
"# Pickle scoring explainer locally to './scoring_explainer.pkl'\n",
"scoring_explainer_file_name = \"scoring_explainer.pkl\"\n",
"with open(scoring_explainer_file_name, \"wb\") as stream:\n",
"    joblib.dump(scoring_explainer, stream)\n",
"\n",
"# Upload the scoring explainer to the automl run\n",
"automl_run.upload_file(\"outputs/scoring_explainer.pkl\", scoring_explainer_file_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploying the scoring and explainer models as a web service to Azure Kubernetes Service (AKS)\n",
"\n",
"We use the `TreeScoringExplainer` from the azureml.interpret package to create the scoring explainer, which will be used to compute the raw and engineered feature importances at inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Register the trained automl model present in the 'outputs' folder in the artifacts\n",
"original_model = automl_run.register_model(\n",
"    model_name=\"automl_model\", model_path=\"outputs/model.pkl\"\n",
")\n",
"scoring_explainer_model = automl_run.register_model(\n",
"    model_name=\"scoring_explainer\", model_path=\"outputs/scoring_explainer.pkl\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create the conda dependencies for setting up the service\n",
"\n",
"We need to download the conda dependencies using the `automl_run` object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.automl.core.shared import constants\n",
"from azureml.core.environment import Environment\n",
"\n",
"automl_run.download_file(constants.CONDA_ENV_FILE_PATH, \"myenv.yml\")\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"myenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Write the Entry Script\n",
"Write the script that the web service will use to make predictions with your model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import joblib\n",
"import pandas as pd\n",
"from azureml.core.model import Model\n",
"from azureml.train.automl.runtime.automl_explain_utilities import (\n",
"    automl_setup_model_explanations,\n",
")\n",
"\n",
"\n",
"def init():\n",
"    global automl_model\n",
"    global scoring_explainer\n",
"\n",
"    # Retrieve the path to each model file using the names the models\n",
"    # were registered under above ('automl_model' and 'scoring_explainer')\n",
"    automl_model_path = Model.get_model_path(\"automl_model\")\n",
"    scoring_explainer_path = Model.get_model_path(\"scoring_explainer\")\n",
"\n",
"    automl_model = joblib.load(automl_model_path)\n",
"    scoring_explainer = joblib.load(scoring_explainer_path)\n",
"\n",
"\n",
"def run(raw_data):\n",
"    data = pd.read_json(raw_data, orient=\"records\")\n",
"    # Make prediction\n",
"    predictions = automl_model.predict(data)\n",
"    # Setup for inferencing explanations\n",
"    automl_explainer_setup_obj = automl_setup_model_explanations(\n",
"        automl_model, X_test=data, task=\"classification\"\n",
"    )\n",
"    # Retrieve model explanations for engineered explanations\n",
"    engineered_local_importance_values = scoring_explainer.explain(\n",
"        automl_explainer_setup_obj.X_test_transform\n",
"    )\n",
"    # Retrieve model explanations for raw explanations\n",
"    raw_local_importance_values = scoring_explainer.explain(\n",
"        automl_explainer_setup_obj.X_test_transform, get_raw=True\n",
"    )\n",
"    # You can return any data type as long as it is JSON-serializable\n",
"    return {\n",
"        \"predictions\": predictions.tolist(),\n",
"        \"engineered_local_importance_values\": engineered_local_importance_values,\n",
"        \"raw_local_importance_values\": raw_local_importance_values,\n",
"    }"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create the InferenceConfig\n",
"Create the inference config that will be used when deploying the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inf_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Provision the AKS Cluster\n",
"This is a one-time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, you will have to recreate it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AksCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your cluster.\n",
"aks_name = \"scoring-explain\"\n",
"\n",
"# Verify that the cluster does not exist already\n",
"try:\n",
"    aks_target = ComputeTarget(workspace=ws, name=aks_name)\n",
"    print(\"Found existing cluster, using it.\")\n",
"except ComputeTargetException:\n",
"    prov_config = AksCompute.provisioning_configuration(vm_size=\"STANDARD_D3_V2\")\n",
"    aks_target = ComputeTarget.create(\n",
"        workspace=ws, name=aks_name, provisioning_configuration=prov_config\n",
"    )\n",
"aks_target.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Deploy web service to AKS"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the web service configuration (using defaults here)\n",
"from azureml.core.webservice import AksWebservice\n",
"from azureml.core.model import Model\n",
"\n",
"aks_config = AksWebservice.deploy_configuration()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service_name = \"model-scoring-local-aks\"\n",
"\n",
"aks_service = Model.deploy(\n",
"    workspace=ws,\n",
"    name=aks_service_name,\n",
"    models=[scoring_explainer_model, original_model],\n",
"    inference_config=inf_config,\n",
"    deployment_config=aks_config,\n",
"    deployment_target=aks_target,\n",
")\n",
"\n",
"aks_service.wait_for_deployment(show_output=True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### View the service logs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.get_logs()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Consume the web service using the run method to do the scoring and the explanation of the scoring\n",
"We test the web service by passing data. The `run()` method retrieves the API keys behind the scenes to make sure that the call is authenticated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Serialize the first row of the test data into json\n",
"X_test_json = X_test_df[:1].to_json(orient=\"records\")\n",
"print(X_test_json)\n",
"\n",
"# Call the service to get the predictions and the engineered and raw explanations\n",
"output = aks_service.run(X_test_json)\n",
"\n",
"# Print the predicted value\n",
"print(\"predictions:\\n{}\\n\".format(output[\"predictions\"]))\n",
"# Print the engineered feature importances for the predicted value\n",
"print(\n",
"    \"engineered_local_importance_values:\\n{}\\n\".format(\n",
"        output[\"engineered_local_importance_values\"]\n",
"    )\n",
")\n",
"# Print the raw feature importances for the predicted value\n",
"print(\n",
"    \"raw_local_importance_values:\\n{}\\n\".format(output[\"raw_local_importance_values\"])\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Clean up\n",
"Delete the service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aks_service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Acknowledgements"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This Credit Card Fraud Detection dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/. The dataset is available at: https://www.kaggle.com/mlg-ulb/creditcardfraud\n",
"\n",
"The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available at https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project.\n",
"\n",
"Please cite the following works:\n",
"- Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015\n",
"- Dal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Aël; Waterschoot, Serge; Bontempi, Gianluca. Learned lessons in credit card fraud detection from a practitioner perspective. Expert Systems with Applications, 41(10), 4915-4928, 2014, Pergamon\n",
"- Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy. IEEE Transactions on Neural Networks and Learning Systems, 29(8), 3784-3797, 2018, IEEE\n",
"- Dal Pozzolo, Andrea. Adaptive Machine Learning for Credit Card Fraud Detection. ULB MLG PhD thesis (supervised by G. Bontempi)\n",
"- Carcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-Aël; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. Scarff: a scalable framework for streaming credit card fraud detection with Spark. Information Fusion, 41, 182-194, 2018, Elsevier\n",
"- Carcillo, Fabrizio; Le Borgne, Yann-Aël; Caelen, Olivier; Bontempi, Gianluca. Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization. International Journal of Data Science and Analytics, 5(4), 285-300, 2018, Springer International Publishing"
]
}
],
"metadata": {
"authors": [
{
"name": "ratanase"
}
],
"category": "tutorial",
"compute": [
"Local"
],
"datasets": [
"creditcard"
],
"deployment": [
"None"
],
"exclude_from_index": true,
"file_extension": ".py",
"framework": [
"None"
],
"friendly_name": "Classification of credit card fraudulent transactions using Automated ML",
"index_order": 5,
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
},
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"tags": [
"local_run",
"AutomatedML"
],
"task": "Classification",
"version": "3.6.7"
},
"nbformat": 4,
"nbformat_minor": 2
}