{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Explain binary classification model predictions with raw feature transformations\n",
|
|
"_**This notebook showcases how to use the Azure Machine Learning Interpretability SDK to explain and visualize a binary classification model that uses advanced many to one or many to many feature transformations.**_\n",
|
|
"\n",
|
|
"\n",
|
|
"\n",
|
|
"## Table of Contents\n",
|
|
"\n",
|
|
"1. [Introduction](#Introduction)\n",
|
|
"1. [Setup](#Setup)\n",
|
|
"1. [Run model explainer locally at training time](#Explain)\n",
|
|
" 1. Apply feature transformations\n",
|
|
" 1. Train a binary classification model\n",
|
|
" 1. Explain the model on raw features\n",
|
|
" 1. Generate global explanations\n",
|
|
" 1. Generate local explanations\n",
|
|
"1. [Visualize results](#Visualize)\n",
|
|
"1. [Next steps](#Next)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## Introduction\n",
|
|
"\n",
|
|
"This notebook illustrates creating explanations for a binary classification model, Titanic passenger data classification, that uses many to one and many to many feature transformations from raw data to engineered features. For the many to one transformation, we sum 2 features `age` and `fare`. For many to many transformations two features are computed: one that is product of `age` and `fare` and another that is square of this product. Our tabular data explainer is then used to get the explanation object with the flag `allow_all_transformations` passed. The object is then used to get raw feature importances.\n",
|
|
"\n",
|
|
"\n",
|
|
"We will showcase raw feature transformations with three tabular data explainers: TabularExplainer (SHAP), MimicExplainer (global surrogate), and PFIExplainer.\n",
|
|
"\n",
|
|
"|  |\n",
|
|
"|:--:|\n",
|
|
"| *Interpretability Toolkit Architecture* |\n",
|
|
"\n",
|
|
"Problem: Titanic passenger data classification with scikit-learn (run model explainer locally)\n",
|
|
"\n",
|
|
"1. Transform raw features to engineered features\n",
|
|
"2. Train a Logistic Regression model using Scikit-learn\n",
|
|
"3. Run 'explain_model' globally and locally with full dataset in local mode, which doesn't contact any Azure services.\n",
|
|
"4. Visualize the global and local explanations with the visualization dashboard.\n",
|
|
"---\n",
|
|
"\n",
|
|
"Setup: If you are using Jupyter notebooks, the extensions should be installed automatically with the package.\n",
|
|
"If you are using Jupyter Labs run the following command:\n",
|
|
"```\n",
|
|
"(myenv) $ jupyter labextension install @jupyter-widgets/jupyterlab-manager\n",
|
|
"```\n"
|
|
]
|
|
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explain\n",
"\n",
"### Run model explainer locally at training time"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.pipeline import Pipeline\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
"from sklearn.linear_model import LogisticRegression\n",
"import pandas as pd\n",
"import numpy as np\n",
"\n",
"# Explainers:\n",
"# 1. SHAP Tabular Explainer\n",
"from interpret.ext.blackbox import TabularExplainer\n",
"\n",
"# OR\n",
"\n",
"# 2. Mimic Explainer\n",
"from interpret.ext.blackbox import MimicExplainer\n",
"# You can use one of the following four interpretable models as a global surrogate for the black-box model\n",
"from interpret.ext.glassbox import LGBMExplainableModel\n",
"from interpret.ext.glassbox import LinearExplainableModel\n",
"from interpret.ext.glassbox import SGDExplainableModel\n",
"from interpret.ext.glassbox import DecisionTreeExplainableModel\n",
"\n",
"# OR\n",
"\n",
"# 3. PFI Explainer\n",
"from interpret.ext.blackbox import PFIExplainer "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load the Titanic passenger data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"titanic_url = ('https://raw.githubusercontent.com/amueller/'\n",
"               'scipy-2017-sklearn/091d371/notebooks/datasets/titanic3.csv')\n",
"data = pd.read_csv(titanic_url)\n",
"# fill missing values\n",
"data = data.fillna(method=\"ffill\")\n",
"data = data.fillna(method=\"bfill\")"
]
},
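{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally (this inspection cell is an illustrative addition, not part of the original pipeline), take a quick look at the loaded data to confirm the download and the forward/backward fill worked:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustration only: inspect the shape and first few rows of the loaded data\n",
"print(data.shape)\n",
"data.head()"
]
},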
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to example [here](https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html#sphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py), use a subset of columns"
|
|
]
|
|
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import train_test_split\n",
"\n",
"numeric_features = ['age', 'fare']\n",
"categorical_features = ['embarked', 'sex', 'pclass']\n",
"\n",
"y = data['survived'].values\n",
"X = data[categorical_features + numeric_features]\n",
"\n",
"# Split data into train and test\n",
"x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Transform raw features"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can explain raw features by either using a `sklearn.compose.ColumnTransformer` or a list of fitted transformer tuples. The cell below uses `sklearn.compose.ColumnTransformer`. In case you want to run the example with the list of fitted transformer tuples, comment the cell below and uncomment the cell that follows after. "
|
|
]
|
|
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We add many-to-one and many-to-many transformations for illustration purposes.\n",
"# Raw feature explanations for many-to-one and many-to-many transformations are only supported\n",
"# when allow_all_transformations is set to True at explainer creation.\n",
"from sklearn.preprocessing import FunctionTransformer\n",
"many_to_one_transformer = FunctionTransformer(lambda x: x.sum(axis=1).reshape(-1, 1))\n",
"many_to_many_transformer = FunctionTransformer(lambda x: np.hstack(\n",
"    (np.prod(x, axis=1).reshape(-1, 1), (np.prod(x, axis=1)**2).reshape(-1, 1))\n",
"))"
]
},
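{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (an illustrative addition, not part of the original pipeline), the cell below applies the two transformers defined above to a toy array: for a row `[2, 3]`, the many-to-one transformer returns the sum `5`, and the many-to-many transformer returns the product `6` and its square `36`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustration only: apply the transformers defined above to a toy 2-column array\n",
"toy = np.array([[2.0, 3.0], [4.0, 5.0]])\n",
"print(many_to_one_transformer.fit_transform(toy))   # row sums: [[5.], [9.]]\n",
"print(many_to_many_transformer.fit_transform(toy))  # [product, product**2]: [[6., 36.], [20., 400.]]"
]
},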
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.compose import ColumnTransformer\n",
"\n",
"transformations = ColumnTransformer([\n",
"    (\"age_fare_1\", Pipeline(steps=[\n",
"        ('imputer', SimpleImputer(strategy='median')),\n",
"        ('scaler', StandardScaler())\n",
"    ]), [\"age\", \"fare\"]),\n",
"    (\"age_fare_2\", many_to_one_transformer, [\"age\", \"fare\"]),\n",
"    (\"age_fare_3\", many_to_many_transformer, [\"age\", \"fare\"]),\n",
"    (\"embarked\", Pipeline(steps=[\n",
"        (\"imputer\", SimpleImputer(strategy='constant', fill_value='missing')), \n",
"        (\"encoder\", OneHotEncoder(sparse=False))]), [\"embarked\"]),\n",
"    (\"sex_pclass\", OneHotEncoder(sparse=False), [\"sex\", \"pclass\"]) \n",
"])\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"'''\n",
|
|
"# Uncomment below if sklearn-pandas is not installed\n",
|
|
"#!pip install sklearn-pandas\n",
|
|
"from sklearn_pandas import DataFrameMapper\n",
|
|
"\n",
|
|
"# Impute, standardize the numeric features and one-hot encode the categorical features. \n",
|
|
"\n",
|
|
"transformations = [\n",
|
|
" ([\"age\", \"fare\"], Pipeline(steps=[\n",
|
|
" ('imputer', SimpleImputer(strategy='median')),\n",
|
|
" ('scaler', StandardScaler())\n",
|
|
" ])),\n",
|
|
" ([\"age\", \"fare\"], many_to_one_transformer),\n",
|
|
" ([\"age\", \"fare\"], many_to_many_transformer),\n",
|
|
" ([\"embarked\"], Pipeline(steps=[\n",
|
|
" (\"imputer\", SimpleImputer(strategy='constant', fill_value='missing')), \n",
|
|
" (\"encoder\", OneHotEncoder(sparse=False))])),\n",
|
|
" ([\"sex\", \"pclass\"], OneHotEncoder(sparse=False)) \n",
|
|
"]\n",
|
|
"\n",
|
|
"\n",
|
|
"# Append classifier to preprocessing pipeline.\n",
|
|
"# Now we have a full prediction pipeline.\n",
|
|
"clf = Pipeline(steps=[('preprocessor', DataFrameMapper(transformations)),\n",
|
|
" ('classifier', LogisticRegression(solver='lbfgs'))])\n",
|
|
"'''"
|
|
]
|
|
},
|
|
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Train a Logistic Regression model, which you want to explain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Append classifier to preprocessing pipeline.\n",
"# Now we have a full prediction pipeline.\n",
"clf = Pipeline(steps=[('preprocessor', transformations),\n",
"                      ('classifier', LogisticRegression(solver='lbfgs'))])\n",
"model = clf.fit(x_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Explain predictions on your local machine"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 1. Using SHAP TabularExplainer\n",
|
|
"# When the last parameter allow_all_transformations is passed, we handle many to one and many to many transformations to \n",
|
|
"# generate approximations to raw feature importances. When this flag is passed, for transformations not recognized as one to \n",
|
|
"# many, we distribute feature importances evenly to raw features generating them.\n",
|
|
"# clf.steps[-1][1] returns the trained classification model\n",
|
|
"explainer = TabularExplainer(clf.steps[-1][1], \n",
|
|
" initialization_examples=x_train, \n",
|
|
" features=x_train.columns, \n",
|
|
" transformations=transformations, \n",
|
|
" allow_all_transformations=True)\n",
|
|
"\n",
|
|
"\n",
|
|
"\n",
|
|
"\n",
|
|
"# 2. Using MimicExplainer\n",
|
|
"# augment_data is optional and if true, oversamples the initialization examples to improve surrogate model accuracy to fit original model. Useful for high-dimensional data where the number of rows is less than the number of columns. \n",
|
|
"# max_num_of_augmentations is optional and defines max number of times we can increase the input data size.\n",
|
|
"# LGBMExplainableModel can be replaced with LinearExplainableModel, SGDExplainableModel, or DecisionTreeExplainableModel\n",
|
|
"# explainer = MimicExplainer(clf.steps[-1][1], \n",
|
|
"# x_train, \n",
|
|
"# LGBMExplainableModel, \n",
|
|
"# augment_data=True, \n",
|
|
"# max_num_of_augmentations=10, \n",
|
|
"# features=x_train.columns, \n",
|
|
"# transformations=transformations, \n",
|
|
"# allow_all_transformations=True)\n",
|
|
"\n",
|
|
"\n",
|
|
"\n",
|
|
"\n",
|
|
"\n",
|
|
"# 3. Using PFIExplainer\n",
|
|
"\n",
|
|
"# Use the parameter \"metric\" to pass a metric name or function to evaluate the permutation. \n",
|
|
"# Note that if a metric function is provided a higher value must be better.\n",
|
|
"# Otherwise, take the negative of the function or set the parameter \"is_error_metric\" to True.\n",
|
|
"# Default metrics: \n",
|
|
"# F1 Score for binary classification, F1 Score with micro average for multiclass classification and\n",
|
|
"# Mean absolute error for regression\n",
|
|
"\n",
|
|
"\n",
|
|
"# explainer = PFIExplainer(clf.steps[-1][1], \n",
|
|
"# features=x_train.columns, \n",
|
|
"# transformations=transformations)"
|
|
]
|
|
},
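{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the even-distribution heuristic above concrete, here is a minimal sketch (illustrative only, not the SDK's internal code): assuming an engineered feature built from several raw features, its importance is split equally among those raw features when the transformation is not recognized as one-to-many."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustration only: the even-split heuristic for approximating raw feature importances\n",
"engineered_importance = 0.4            # hypothetical importance of the engineered feature sum(age, fare)\n",
"parent_raw_features = ['age', 'fare']  # raw features that generate it\n",
"raw_importances = {f: engineered_importance / len(parent_raw_features) for f in parent_raw_features}\n",
"print(raw_importances)  # {'age': 0.2, 'fare': 0.2}"
]
},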
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate global explanations\n",
"Explain overall model predictions (global explanation)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Passing in test dataset for evaluation examples - note it must be a representative sample of the original data\n",
|
|
"# x_train can be passed as well, but with more examples explanations will take longer although they may be more accurate\n",
|
|
"\n",
|
|
"global_explanation = explainer.explain_global(x_test)\n",
|
|
"\n",
|
|
"# Note: if you used the PFIExplainer in the previous step, use the next line of code instead\n",
|
|
"# global_explanation = explainer.explain_global(x_test, true_labels=y_test)"
|
|
]
|
|
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sorted SHAP values\n",
"print('ranked global importance values: {}'.format(global_explanation.get_ranked_global_values()))\n",
"# Corresponding feature names\n",
"print('ranked global importance names: {}'.format(global_explanation.get_ranked_global_names()))\n",
"# Feature ranks (based on original order of features)\n",
"print('global importance rank: {}'.format(global_explanation.global_importance_rank))\n",
"# Per class feature names\n",
"print('ranked per class feature names: {}'.format(global_explanation.get_ranked_per_class_names()))\n",
"# Per class feature importance values\n",
"print('ranked per class feature values: {}'.format(global_explanation.get_ranked_per_class_values()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Print out a dictionary that holds the sorted feature importance names and values\n",
"print('global importance dict: {}'.format(global_explanation.get_feature_importance_dict()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Explain overall model predictions as a collection of local (instance-level) explanations"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# feature shap values for all features and all data points in the training data\n",
|
|
"print('local importance values: {}'.format(global_explanation.local_importance_values))"
|
|
]
|
|
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate local explanations\n",
"Explain local data points (individual instances)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Note: PFIExplainer does not support local explanations\n",
"# You can pass a specific data point or a group of data points to the explain_local function\n",
"\n",
"# E.g., Explain the first data point in the test set\n",
|
|
"instance_num = 1\n",
|
|
"local_explanation = explainer.explain_local(x_test[:instance_num])"
|
|
]
|
|
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get the prediction for the first member of the test set and explain why the model made that prediction\n",
"prediction_value = clf.predict(x_test)[instance_num]\n",
"\n",
"sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]\n",
"sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]\n",
"\n",
"print('local importance values: {}'.format(sorted_local_importance_values))\n",
"print('local importance names: {}'.format(sorted_local_importance_names))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualize\n",
"Load the visualization dashboard"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.contrib.interpret.visualize import ExplanationDashboard"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ExplanationDashboard(global_explanation, model, x_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next\n",
"Learn about other use cases of the explain package on a:\n",
|
|
" \n",
|
|
"1. [Training time: regression problem](./explain-regression-local.ipynb)\n",
|
|
"1. [Training time: binary classification problem](./explain-binary-classification-local.ipynb)\n",
|
|
"1. [Training time: multiclass classification problem](./explain-multiclass-classification-local.ipynb)\n",
|
|
"1. [Explain models with simple feature transformations](./simple-feature-transformations-explain-local.ipynb)\n",
|
|
"1. [Save model explanations via Azure Machine Learning Run History](../azure-integration/run-history/save-retrieve-explanations-run-history.ipynb)\n",
|
|
"1. [Run explainers remotely on Azure Machine Learning Compute (AMLCompute)](../azure-integration/remote-explanation/explain-model-on-amlcompute.ipynb)\n",
|
|
"1. Inferencing time: deploy a classification model and explainer:\n",
|
|
" 1. [Deploy a locally-trained model and explainer](../azure-integration/scoring-time/train-explain-model-locally-and-deploy.ipynb)\n",
|
|
" 1. [Deploy a remotely-trained model and explainer](../azure-integration/scoring-time/train-explain-model-on-amlcompute-and-deploy.ipynb)"
|
|
]
|
|
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"authors": [
{
"name": "mesameki"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}