Compare commits

...

1 Commit

Author:  amlrelsa-ms
SHA1:    5da664d65c
Message: update samples from Release-80 as a part of 1.27.0 SDK stable release
Date:    2021-04-19 16:01:25 +00:00
29 changed files with 184 additions and 41 deletions

View File

@@ -103,7 +103,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -36,9 +36,9 @@
"\n",
"<a id=\"Introduction\"></a>\n",
"## Introduction\n",
"This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).\n",
"This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.org) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.org/).\n",
"\n",
"We will apply the [grid search algorithm](https://fairlearn.github.io/master/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n",
"We will apply the [grid search algorithm](https://fairlearn.org/v0.4.6/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n",
"\n",
"### Setup\n",
"\n",
@@ -48,7 +48,7 @@
"* `azureml-contrib-fairness`\n",
"* `fairlearn==0.4.6` (v0.5.0 will work with minor modifications)\n",
"* `joblib`\n",
"* `shap`\n",
"* `liac-arff`\n",
"\n",
"Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:"
]
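For context, the grid-search workflow this notebook's introduction describes looks roughly like the sketch below, assuming `fairlearn==0.4.6` and hypothetical pre-split data `X_train`, `y_train` with sensitive features `A_train`; the code is illustrative only and is not part of this commit.

```python
# Illustrative sketch of the Fairlearn reductions workflow described above
# (assumes fairlearn==0.4.6; X_train, y_train, A_train are hypothetical names).
from fairlearn.reductions import GridSearch, DemographicParity
from sklearn.linear_model import LogisticRegression

sweep = GridSearch(LogisticRegression(solver='liblinear'),
                   constraints=DemographicParity(),
                   grid_size=71)
sweep.fit(X_train, y_train, sensitive_features=A_train['sex'])

# Each grid point yields one candidate model; these are the models viewed
# in the fairness dashboard locally and in Azure Machine Learning Studio.
candidate_models = sweep._predictors  # renamed predictors_ in fairlearn >= 0.5
```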
@@ -88,7 +88,6 @@
"from fairlearn.widget import FairlearnDashboard\n",
"\n",
"from sklearn.compose import ColumnTransformer\n",
"from sklearn.datasets import fetch_openml\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import train_test_split\n",
@@ -112,9 +111,9 @@
"metadata": {},
"outputs": [],
"source": [
"from fairness_nb_utils import fetch_openml_with_retries\n",
"from fairness_nb_utils import fetch_census_dataset\n",
"\n",
"data = fetch_openml_with_retries(data_id=1590)\n",
"data = fetch_census_dataset()\n",
" \n",
"# Extract the items we want\n",
"X_raw = data.data\n",
@@ -137,7 +136,7 @@
"outputs": [],
"source": [
"A = X_raw[['sex','race']]\n",
"X_raw = X_raw.drop(labels=['sex', 'race'],axis = 1)"
"X_raw = X_raw.drop(labels=['sex', 'race'], axis = 1)"
]
},
{
@@ -584,7 +583,7 @@
"<a id=\"Conclusion\"></a>\n",
"## Conclusion\n",
"\n",
"In this notebook we have demonstrated how to use the `GridSearch` algorithm from Fairlearn to generate a collection of models, and then present them in the fairness dashboard in Azure Machine Learning Studio. Please remember that this notebook has not attempted to discuss the many considerations which should be part of any approach to unfairness mitigation. The [Fairlearn website](http://fairlearn.github.io/) provides that discussion"
"In this notebook we have demonstrated how to use the `GridSearch` algorithm from Fairlearn to generate a collection of models, and then present them in the fairness dashboard in Azure Machine Learning Studio. Please remember that this notebook has not attempted to discuss the many considerations which should be part of any approach to unfairness mitigation. The [Fairlearn website](http://fairlearn.org/) provides that discussion"
]
},
{

View File

@@ -5,3 +5,4 @@ dependencies:
- azureml-contrib-fairness
- fairlearn==0.4.6
- joblib
- liac-arff

View File

@@ -4,7 +4,13 @@
"""Utilities for azureml-contrib-fairness notebooks."""
import arff
from collections import OrderedDict
from contextlib import closing
import gzip
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.utils import Bunch
import time
@@ -26,3 +32,62 @@ def fetch_openml_with_retries(data_id, max_retries=4, retry_delay=60):
raise RuntimeError("Unable to download dataset from OpenML")
return data
_categorical_columns = [
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
]
def fetch_census_dataset():
"""Fetch the Adult Census Dataset
This uses a particular URL for the Adult Census dataset. The code
is a simplified version of fetch_openml() in sklearn.
The data are copied from:
https://openml.org/data/v1/download/1595261.gz
(as of 2021-03-31)
"""
try:
from urllib import urlretrieve
except ImportError:
from urllib.request import urlretrieve
filename = "1595261.gz"
data_url = "https://rainotebookscdn.blob.core.windows.net/datasets/"
urlretrieve(data_url + filename, filename)
http_stream = gzip.GzipFile(filename=filename, mode='rb')
with closing(http_stream):
def _stream_generator(response):
for line in response:
yield line.decode('utf-8')
stream = _stream_generator(http_stream)
data = arff.load(stream)
attributes = OrderedDict(data['attributes'])
arff_columns = list(attributes)
raw_df = pd.DataFrame(data=data['data'], columns=arff_columns)
target_column_name = 'class'
target = raw_df.pop(target_column_name)
for col_name in _categorical_columns:
dtype = pd.api.types.CategoricalDtype(attributes[col_name])
raw_df[col_name] = raw_df[col_name].astype(dtype, copy=False)
result = Bunch()
result.data = raw_df
result.target = target
return result
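A quick way to exercise the new helper (a sketch, not part of the commit; it needs `liac-arff`, `pandas`, `scikit-learn`, and network access to the blob URL above):

```python
from fairness_nb_utils import fetch_census_dataset

census = fetch_census_dataset()        # downloads 1595261.gz and parses the ARFF
print(census.data.shape)               # feature DataFrame, 'class' column removed
print(census.target.value_counts())    # label counts, e.g. '<=50K' vs '>50K'
```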

View File

@@ -50,7 +50,7 @@
"* `azureml-contrib-fairness`\n",
"* `fairlearn==0.4.6` (should also work with v0.5.0)\n",
"* `joblib`\n",
"* `shap`\n",
"* `liac-arff`\n",
"\n",
"Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:"
]
@@ -88,7 +88,6 @@
"source": [
"from sklearn import svm\n",
"from sklearn.compose import ColumnTransformer\n",
"from sklearn.datasets import fetch_openml\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import train_test_split\n",
@@ -110,9 +109,9 @@
"metadata": {},
"outputs": [],
"source": [
"from fairness_nb_utils import fetch_openml_with_retries\n",
"from fairness_nb_utils import fetch_census_dataset\n",
"\n",
"data = fetch_openml_with_retries(data_id=1590)\n",
"data = fetch_census_dataset()\n",
" \n",
"# Extract the items we want\n",
"X_raw = data.data\n",

View File

@@ -5,3 +5,4 @@ dependencies:
- azureml-contrib-fairness
- fairlearn==0.4.6
- joblib
- liac-arff

View File

@@ -21,8 +21,8 @@ dependencies:
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets~=1.26.0
- azureml-widgets~=1.27.0
- pytorch-transformers==1.0.0
- spacy==2.1.8
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.26.0/validated_win32_requirements.txt [--no-deps]
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.27.0/validated_win32_requirements.txt [--no-deps]

View File

@@ -21,8 +21,8 @@ dependencies:
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets~=1.26.0
- azureml-widgets~=1.27.0
- pytorch-transformers==1.0.0
- spacy==2.1.8
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.26.0/validated_linux_requirements.txt [--no-deps]
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.27.0/validated_linux_requirements.txt [--no-deps]

View File

@@ -22,8 +22,8 @@ dependencies:
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-widgets~=1.26.0
- azureml-widgets~=1.27.0
- pytorch-transformers==1.0.0
- spacy==2.1.8
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.26.0/validated_darwin_requirements.txt [--no-deps]
- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.27.0/validated_darwin_requirements.txt [--no-deps]

View File

@@ -105,7 +105,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -93,7 +93,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -96,7 +96,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -281,7 +281,7 @@
"outputs": [],
"source": [
"automl_settings = {\n",
" \"experiment_timeout_minutes\": 20,\n",
" \"experiment_timeout_minutes\": 30,\n",
" \"primary_metric\": 'accuracy',\n",
" \"max_concurrent_iterations\": num_nodes, \n",
" \"max_cores_per_iteration\": -1,\n",

View File

@@ -81,7 +81,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -91,7 +91,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -113,7 +113,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -87,7 +87,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -97,7 +97,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -94,7 +94,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -82,7 +82,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -96,7 +96,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -96,7 +96,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},
@@ -439,7 +439,7 @@
"\n",
"### Retrieve any AutoML Model for explanations\n",
"\n",
"Below we select the some AutoML pipeline from our iterations. The `get_output` method returns the a AutoML run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
"Below we select an AutoML pipeline from our iterations. The `get_output` method returns the a AutoML run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for any logged `metric` or for a particular `iteration`."
]
},
{
@@ -448,7 +448,8 @@
"metadata": {},
"outputs": [],
"source": [
"automl_run, fitted_model = remote_run.get_output(metric='r2_score')"
"#automl_run, fitted_model = remote_run.get_output(metric='r2_score')\n",
"automl_run, fitted_model = remote_run.get_output(iteration=2)"
]
},
{
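For reference, the `get_output` overloads described above can be called as follows (a sketch, assuming `remote_run` is a completed AutoML run; not part of this change):

```python
# Each call returns a (run, fitted_model) pair.
best_run, best_model = remote_run.get_output()                        # best run by the primary metric
metric_run, metric_model = remote_run.get_output(metric='r2_score')   # best run for a named metric
iter_run, iter_model = remote_run.get_output(iteration=2)             # model from a specific iteration
```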

View File

@@ -92,7 +92,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -388,6 +388,7 @@
"from azureml.core.webservice import AciWebservice\n",
"from azureml.core.model import Model\n",
"from azureml.core.environment import Environment\n",
"from azureml.exceptions import WebserviceException\n",
"\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, \n",

View File

@@ -486,6 +486,7 @@
"from azureml.core.webservice import AciWebservice\n",
"from azureml.core.model import Model\n",
"from azureml.core.environment import Environment\n",
"from azureml.exceptions import WebserviceException\n",
"\n",
"\n",
"aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, \n",

View File

@@ -72,7 +72,6 @@
"from fairlearn.reductions import GridSearch\n",
"from fairlearn.reductions import DemographicParity, ErrorRate\n",
"\n",
"from sklearn import svm, neighbors, tree\n",
"from sklearn.compose import ColumnTransformer, make_column_selector\n",
"from sklearn.preprocessing import LabelEncoder,StandardScaler\n",
"from sklearn.linear_model import LogisticRegression\n",
@@ -81,10 +80,8 @@
"from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
"from sklearn.svm import SVC\n",
"from sklearn.metrics import accuracy_score\n",
"from sklearn.datasets import fetch_openml\n",
"\n",
"import pandas as pd\n",
"import numpy as np\n",
"\n",
"# SHAP Tabular Explainer\n",
"from interpret.ext.blackbox import KernelExplainer\n",
@@ -105,7 +102,9 @@
"metadata": {},
"outputs": [],
"source": [
"dataset = fetch_openml(data_id=1590, as_frame=True)\n",
"from utilities import fetch_census_dataset\n",
"\n",
"dataset = fetch_census_dataset()\n",
"X_raw, y = dataset['data'], dataset['target']\n",
"X_raw[\"race\"].value_counts().to_dict()"
]

View File

@@ -9,3 +9,4 @@ dependencies:
- azureml-dataset-runtime
- ipywidgets
- raiwidgets
- liac-arff

View File

@@ -0,0 +1,75 @@
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
"""Utilities for azureml-contrib-fairness notebooks."""
import arff
from collections import OrderedDict
from contextlib import closing
import gzip
import pandas as pd
from sklearn.utils import Bunch
def _is_gzip_encoded(_fsrc):
    return _fsrc.info().get('Content-Encoding', '') == 'gzip'


_categorical_columns = [
    'workclass',
    'education',
    'marital-status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'native-country'
]


def fetch_census_dataset():
    """Fetch the Adult Census Dataset

    This uses a particular URL for the Adult Census dataset. The code
    is a simplified version of fetch_openml() in sklearn.

    The data are copied from:
    https://openml.org/data/v1/download/1595261.gz
    (as of 2021-03-31)
    """
    try:
        from urllib import urlretrieve
    except ImportError:
        from urllib.request import urlretrieve

    filename = "1595261.gz"
    data_url = "https://rainotebookscdn.blob.core.windows.net/datasets/"
    urlretrieve(data_url + filename, filename)

    http_stream = gzip.GzipFile(filename=filename, mode='rb')

    with closing(http_stream):
        def _stream_generator(response):
            for line in response:
                yield line.decode('utf-8')

        stream = _stream_generator(http_stream)
        data = arff.load(stream)

    attributes = OrderedDict(data['attributes'])
    arff_columns = list(attributes)

    raw_df = pd.DataFrame(data=data['data'], columns=arff_columns)

    target_column_name = 'class'
    target = raw_df.pop(target_column_name)

    for col_name in _categorical_columns:
        dtype = pd.api.types.CategoricalDtype(attributes[col_name])
        raw_df[col_name] = raw_df[col_name].astype(dtype, copy=False)

    result = Bunch()
    result.data = raw_df
    result.target = target

    return result

View File

@@ -100,7 +100,7 @@
"\n",
"# Check core SDK version number\n",
"\n",
"print(\"This notebook was created using SDK version 1.26.0, you are currently running version\", azureml.core.VERSION)"
"print(\"This notebook was created using SDK version 1.27.0, you are currently running version\", azureml.core.VERSION)"
]
},
{

View File

@@ -102,7 +102,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.26.0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.27.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},