diff --git a/contrib/RAPIDS/README.md b/contrib/RAPIDS/README.md index 8c4386eb..f8ab2cbd 100644 --- a/contrib/RAPIDS/README.md +++ b/contrib/RAPIDS/README.md @@ -6,21 +6,6 @@ After following the directions, the user should end up setting a conda environme The user would also require an Azure Subscription with a Machine Learning Services quota on the desired region for 24 nodes or more (to be able to select a vmSize with 4 GPUs as it is used on the Notebook) on the desired VM family ([NC\_v3](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv3-series), [NC\_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ncv2-series), [ND](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#nd-series) or [ND_v2](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-gpu#ndv2-series-preview)), the specific vmSize to be used within the chosen family would also need to be whitelisted for Machine Learning Services usage. -  -The following examples are available: - -## 1) RAPIDS Hyperparameter Optimization (HPO) on AzureML - -This example is available from: https://github.com/Azure/azureml-examples/tree/main/tutorials/using-rapids, and will walk you through single GPU and single-node multi-GPU accelerated HPO jobs on AzureML. You will be able to train and evaluate models with many different variations of key parameters in order to find the combination that yields the highest accuracy. - -## 2) RAPIDS Multi-Node Multi-GPU Training using Dask Cloud Provider - -This notebook: https://github.com/rapidsai/cloud-ml-examples/blob/main/azure/notebooks/azure_mnmg.ipynb will use the [AzureVMCluster](https://cloudprovider.dask.org/en/latest/azure.html#azurevm) class from [Dask Cloud Provider](https://cloudprovider.dask.org/en/latest/) to set up a Dask cluster on Azure VM instances and train a multi-node multi-GPU Random Forest model. 
- -## 3) RAPIDS End-to-End (E2E) Mortgage Workflow - -The example below will use a dataset from [Fannie Mae’s Single-Family Loan Performance Data](http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html) and the processed dataset is available at [RAPIDS Datasets Homepage](https://docs.rapids.ai/datasets/mortgage-data), which is redistributed with permission and consent from Fannie Mae (note the example below has not been updated with the latest version of RAPIDS, recent examples are available in the repositories mentioned above). -   ### Getting and running the material Clone the AzureML Notebooks repository in GitHub by running the following command on a local_directory: @@ -99,7 +84,7 @@ The successful creation of the compute target would have an output like the following ![](imgs/targetsuccess.png)   #### RAPIDS script uploading and viewing -The next step copies the RAPIDS script process_data.py, which is a slightly modified implementation of the [RAPIDS E2E example](https://github.com/rapidsai-community/notebooks-contrib/blob/branch-0.14/intermediate_notebooks/E2E/mortgage/mortgage_e2e.ipynb), into a script processing folder and it presents its contents to the user. (The script is discussed in the next section in detail). +The next step copies the RAPIDS script process_data.py, which is a slightly modified implementation of the [RAPIDS E2E example](https://github.com/rapidsai/notebooks/blob/master/mortgage/E2E.ipynb), into a script processing folder and presents its contents to the user. (The script is discussed in detail in the next section.) 
If the user wants to use a different RAPIDS script, the references to the process_data.py script have to be changed ![](imgs/scriptuploading.png) diff --git a/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.yml b/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.yml deleted file mode 100644 index 0f30214b..00000000 --- a/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-classification-bank-marketing-all-features -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.yml b/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.yml deleted file mode 100644 index 148f33d5..00000000 --- a/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-classification-credit-card-fraud -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.yml b/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.yml deleted file mode 100644 index 4c952264..00000000 --- a/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-classification-text-dnn -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.yml 
b/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.yml deleted file mode 100644 index 9b05ea1f..00000000 --- a/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-continuous-retraining -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/experimental/regression-model-proxy/auto-ml-regression-model-proxy.yml b/how-to-use-azureml/automated-machine-learning/experimental/regression-model-proxy/auto-ml-regression-model-proxy.yml deleted file mode 100644 index e5d127ea..00000000 --- a/how-to-use-azureml/automated-machine-learning/experimental/regression-model-proxy/auto-ml-regression-model-proxy.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-regression-model-proxy -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-beer-remote/auto-ml-forecasting-beer-remote.yml b/how-to-use-azureml/automated-machine-learning/forecasting-beer-remote/auto-ml-forecasting-beer-remote.yml deleted file mode 100644 index 103560d8..00000000 --- a/how-to-use-azureml/automated-machine-learning/forecasting-beer-remote/auto-ml-forecasting-beer-remote.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-forecasting-beer-remote -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.yml b/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.yml deleted file mode 100644 index 70a3271c..00000000 --- a/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-forecasting-bike-share -dependencies: -- pip: - - azureml-sdk diff --git 
a/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.yml b/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.yml deleted file mode 100644 index 13bd78f8..00000000 --- a/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-forecasting-energy-demand -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.yml b/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.yml deleted file mode 100644 index 144797d6..00000000 --- a/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-forecasting-function -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.yml b/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.yml deleted file mode 100644 index a6cc3e71..00000000 --- a/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-forecasting-orange-juice-sales -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.yml b/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.yml deleted file mode 100644 index 6c817042..00000000 --- 
a/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-classification-credit-card-fraud-local -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.yml b/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.yml deleted file mode 100644 index 9db24f2b..00000000 --- a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-regression-explanation-featurization -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.yml b/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.yml deleted file mode 100644 index 4e84e13a..00000000 --- a/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: auto-ml-regression -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/deployment/deploy-multi-model/multi-model-register-and-deploy.yml b/how-to-use-azureml/deployment/deploy-multi-model/multi-model-register-and-deploy.yml deleted file mode 100644 index 497c43b4..00000000 --- a/how-to-use-azureml/deployment/deploy-multi-model/multi-model-register-and-deploy.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: multi-model-register-and-deploy -dependencies: -- pip: - - azureml-sdk - - numpy - - scikit-learn diff --git a/how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.yml b/how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.yml deleted file mode 100644 index 99efaf15..00000000 --- 
a/how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: model-register-and-deploy -dependencies: -- pip: - - azureml-sdk - - numpy - - scikit-learn diff --git a/how-to-use-azureml/deployment/deploy-with-controlled-rollout/deploy-aks-with-controlled-rollout.yml b/how-to-use-azureml/deployment/deploy-with-controlled-rollout/deploy-aks-with-controlled-rollout.yml deleted file mode 100644 index 0bf2d3a4..00000000 --- a/how-to-use-azureml/deployment/deploy-with-controlled-rollout/deploy-aks-with-controlled-rollout.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: deploy-aks-with-controlled-rollout -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.yml b/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.yml deleted file mode 100644 index c716614a..00000000 --- a/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: enable-app-insights-in-production-service -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.yml b/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.yml deleted file mode 100644 index e8c7ffb9..00000000 --- a/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.yml +++ /dev/null @@ -1,8 +0,0 @@ -name: onnx-convert-aml-deploy-tinyyolo -dependencies: -- pip: - - azureml-sdk - - numpy - - git+https://github.com/apple/coremltools@v2.1 - - onnx<1.7.0 - - onnxmltools diff --git a/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.yml b/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.yml deleted file mode 100644 index 59a17443..00000000 --- 
a/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.yml +++ /dev/null @@ -1,9 +0,0 @@ -name: onnx-inference-facial-expression-recognition-deploy -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - matplotlib - - numpy - - onnx<1.7.0 - - opencv-python-headless diff --git a/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.yml b/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.yml deleted file mode 100644 index 97f9e8b5..00000000 --- a/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.yml +++ /dev/null @@ -1,9 +0,0 @@ -name: onnx-inference-mnist-deploy -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - matplotlib - - numpy - - onnx<1.7.0 - - opencv-python-headless diff --git a/how-to-use-azureml/deployment/onnx/onnx-model-register-and-deploy.yml b/how-to-use-azureml/deployment/onnx/onnx-model-register-and-deploy.yml deleted file mode 100644 index 4671dc87..00000000 --- a/how-to-use-azureml/deployment/onnx/onnx-model-register-and-deploy.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: onnx-model-register-and-deploy -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.yml b/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.yml deleted file mode 100644 index a80e6bb0..00000000 --- a/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: onnx-modelzoo-aml-deploy-resnet50 -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.yml b/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.yml deleted file mode 100644 index c0145ec0..00000000 --- a/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: onnx-train-pytorch-aml-deploy-mnist -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git 
a/how-to-use-azureml/deployment/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.yml b/how-to-use-azureml/deployment/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.yml deleted file mode 100644 index c2afb644..00000000 --- a/how-to-use-azureml/deployment/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: production-deploy-to-aks-gpu -dependencies: -- pip: - - azureml-sdk - - tensorflow diff --git a/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks-ssl.yml b/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks-ssl.yml deleted file mode 100644 index ee14e9a6..00000000 --- a/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks-ssl.yml +++ /dev/null @@ -1,8 +0,0 @@ -name: production-deploy-to-aks-ssl -dependencies: -- pip: - - azureml-sdk - - matplotlib - - tqdm - - scipy - - sklearn diff --git a/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.yml b/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.yml deleted file mode 100644 index addf41c7..00000000 --- a/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.yml +++ /dev/null @@ -1,8 +0,0 @@ -name: production-deploy-to-aks -dependencies: -- pip: - - azureml-sdk - - matplotlib - - tqdm - - scipy - - sklearn diff --git a/how-to-use-azureml/deployment/spark/model-register-and-deploy-spark.yml b/how-to-use-azureml/deployment/spark/model-register-and-deploy-spark.yml deleted file mode 100644 index 8414fbb0..00000000 --- a/how-to-use-azureml/deployment/spark/model-register-and-deploy-spark.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: model-register-and-deploy-spark -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/explain-model/azure-integration/remote-explanation/explain-model-on-amlcompute.yml 
b/how-to-use-azureml/explain-model/azure-integration/remote-explanation/explain-model-on-amlcompute.yml deleted file mode 100644 index e74a5682..00000000 --- a/how-to-use-azureml/explain-model/azure-integration/remote-explanation/explain-model-on-amlcompute.yml +++ /dev/null @@ -1,11 +0,0 @@ -name: explain-model-on-amlcompute -dependencies: -- pip: - - azureml-sdk - - azureml-interpret - - interpret-community[visualization] - - matplotlib - - azureml-contrib-interpret - - sklearn-pandas<2.0.0 - - azureml-dataset-runtime - - ipywidgets diff --git a/how-to-use-azureml/explain-model/azure-integration/run-history/save-retrieve-explanations-run-history.yml b/how-to-use-azureml/explain-model/azure-integration/run-history/save-retrieve-explanations-run-history.yml deleted file mode 100644 index ff76d75f..00000000 --- a/how-to-use-azureml/explain-model/azure-integration/run-history/save-retrieve-explanations-run-history.yml +++ /dev/null @@ -1,9 +0,0 @@ -name: save-retrieve-explanations-run-history -dependencies: -- pip: - - azureml-sdk - - azureml-interpret - - interpret-community[visualization] - - matplotlib - - azureml-contrib-interpret - - ipywidgets diff --git a/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-locally-and-deploy.yml b/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-locally-and-deploy.yml deleted file mode 100644 index 0aaf861f..00000000 --- a/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-locally-and-deploy.yml +++ /dev/null @@ -1,10 +0,0 @@ -name: train-explain-model-locally-and-deploy -dependencies: -- pip: - - azureml-sdk - - azureml-interpret - - interpret-community[visualization] - - matplotlib - - azureml-contrib-interpret - - sklearn-pandas<2.0.0 - - ipywidgets diff --git a/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-on-amlcompute-and-deploy.yml 
b/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-on-amlcompute-and-deploy.yml deleted file mode 100644 index 6407631a..00000000 --- a/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-on-amlcompute-and-deploy.yml +++ /dev/null @@ -1,12 +0,0 @@ -name: train-explain-model-on-amlcompute-and-deploy -dependencies: -- pip: - - azureml-sdk - - azureml-interpret - - interpret-community[visualization] - - matplotlib - - azureml-contrib-interpret - - sklearn-pandas<2.0.0 - - azureml-dataset-runtime - - azureml-core - - ipywidgets diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-data-transfer.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-data-transfer.yml deleted file mode 100644 index 4955c060..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-data-transfer.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: aml-pipelines-data-transfer -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.yml deleted file mode 100644 index 3c4f7d04..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: aml-pipelines-getting-started -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-estimatorstep.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-estimatorstep.yml deleted file mode 100644 index f6ab4807..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-estimatorstep.yml +++ /dev/null @@ -1,5 +0,0 
@@ -name: aml-pipelines-how-to-use-estimatorstep -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-modulestep.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-modulestep.yml deleted file mode 100644 index 2b63300b..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-modulestep.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: aml-pipelines-how-to-use-modulestep -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-pipeline-drafts.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-pipeline-drafts.yml deleted file mode 100644 index 198569b0..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-pipeline-drafts.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: aml-pipelines-how-to-use-pipeline-drafts -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-parameter-tuning-with-hyperdrive.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-parameter-tuning-with-hyperdrive.yml deleted file mode 100644 index 95339947..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-parameter-tuning-with-hyperdrive.yml +++ /dev/null @@ -1,9 +0,0 @@ -name: aml-pipelines-parameter-tuning-with-hyperdrive -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - matplotlib - - numpy - - pandas_ml - - azureml-dataset-runtime[pandas,fuse] diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-publish-and-run-using-rest-endpoint.yml 
b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-publish-and-run-using-rest-endpoint.yml deleted file mode 100644 index ec9828ff..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-publish-and-run-using-rest-endpoint.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: aml-pipelines-publish-and-run-using-rest-endpoint -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - requests diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-schedule-for-a-published-pipeline.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-schedule-for-a-published-pipeline.yml deleted file mode 100644 index f35bb648..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-schedule-for-a-published-pipeline.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: aml-pipelines-setup-schedule-for-a-published-pipeline -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-versioned-pipeline-endpoints.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-versioned-pipeline-endpoints.yml deleted file mode 100644 index aae504eb..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-setup-versioned-pipeline-endpoints.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: aml-pipelines-setup-versioned-pipeline-endpoints -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - requests diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-datapath-and-pipelineparameter.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-datapath-and-pipelineparameter.yml deleted file mode 100644 index 0463f025..00000000 --- 
a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-datapath-and-pipelineparameter.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: aml-pipelines-showcasing-datapath-and-pipelineparameter -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-dataset-and-pipelineparameter.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-dataset-and-pipelineparameter.yml deleted file mode 100644 index 0c5c948c..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-dataset-and-pipelineparameter.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: aml-pipelines-showcasing-dataset-and-pipelineparameter -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-with-automated-machine-learning-step.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-with-automated-machine-learning-step.yml deleted file mode 100644 index fbfd7737..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-with-automated-machine-learning-step.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: aml-pipelines-with-automated-machine-learning-step -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-with-data-dependency-steps.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-with-data-dependency-steps.yml deleted file mode 100644 index 0f034033..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-with-data-dependency-steps.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: aml-pipelines-with-data-dependency-steps -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git 
a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-with-notebook-runner-step.yml b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-with-notebook-runner-step.yml deleted file mode 100644 index 3585659d..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-with-notebook-runner-step.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: aml-pipelines-with-notebook-runner-step -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - azureml-contrib-notebook diff --git a/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.yml b/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.yml deleted file mode 100644 index 12b58a21..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.yml +++ /dev/null @@ -1,10 +0,0 @@ -name: nyc-taxi-data-regression-model-building -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - azureml-opendatasets - - azureml-train-automl - - matplotlib - - pandas - - pyarrow diff --git a/how-to-use-azureml/machine-learning-pipelines/parallel-run/file-dataset-image-inference-mnist.yml b/how-to-use-azureml/machine-learning-pipelines/parallel-run/file-dataset-image-inference-mnist.yml deleted file mode 100644 index 5ddece97..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/parallel-run/file-dataset-image-inference-mnist.yml +++ /dev/null @@ -1,7 +0,0 @@ -name: file-dataset-image-inference-mnist -dependencies: -- pip: - - azureml-sdk - - azureml-pipeline-steps - - azureml-widgets - - pandas diff --git a/how-to-use-azureml/machine-learning-pipelines/parallel-run/tabular-dataset-inference-iris.yml b/how-to-use-azureml/machine-learning-pipelines/parallel-run/tabular-dataset-inference-iris.yml deleted file mode 100644 
index 9bdf3735..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/parallel-run/tabular-dataset-inference-iris.yml +++ /dev/null @@ -1,7 +0,0 @@ -name: tabular-dataset-inference-iris -dependencies: -- pip: - - azureml-sdk - - azureml-pipeline-steps - - azureml-widgets - - pandas diff --git a/how-to-use-azureml/machine-learning-pipelines/pipeline-style-transfer/pipeline-style-transfer-parallel-run.yml b/how-to-use-azureml/machine-learning-pipelines/pipeline-style-transfer/pipeline-style-transfer-parallel-run.yml deleted file mode 100644 index a7671a43..00000000 --- a/how-to-use-azureml/machine-learning-pipelines/pipeline-style-transfer/pipeline-style-transfer-parallel-run.yml +++ /dev/null @@ -1,7 +0,0 @@ -name: pipeline-style-transfer-parallel-run -dependencies: -- pip: - - azureml-sdk - - azureml-pipeline-steps - - azureml-widgets - - requests diff --git a/how-to-use-azureml/ml-frameworks/chainer/distributed-chainer/distributed-chainer.yml b/how-to-use-azureml/ml-frameworks/chainer/distributed-chainer/distributed-chainer.yml deleted file mode 100644 index 0c2ef761..00000000 --- a/how-to-use-azureml/ml-frameworks/chainer/distributed-chainer/distributed-chainer.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: distributed-chainer -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.yml b/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.yml deleted file mode 100644 index 6024bba0..00000000 --- a/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.yml +++ /dev/null @@ -1,12 +0,0 @@ -name: train-hyperparameter-tune-deploy-with-chainer -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - numpy - - matplotlib - - json - - urllib - - gzip - - struct 
- - requests diff --git a/how-to-use-azureml/ml-frameworks/fastai/fastai-with-custom-docker/fastai-with-custom-docker.yml b/how-to-use-azureml/ml-frameworks/fastai/fastai-with-custom-docker/fastai-with-custom-docker.yml deleted file mode 100644 index 3e5f80ae..00000000 --- a/how-to-use-azureml/ml-frameworks/fastai/fastai-with-custom-docker/fastai-with-custom-docker.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: fastai-with-custom-docker -dependencies: -- pip: - - azureml-sdk - - fastai==1.0.61 diff --git a/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.yml b/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.yml deleted file mode 100644 index 8fa4d352..00000000 --- a/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.yml +++ /dev/null @@ -1,8 +0,0 @@ -name: train-hyperparameter-tune-deploy-with-keras -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - tensorflow - - keras<=2.3.1 - - matplotlib diff --git a/how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.yml b/how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.yml deleted file mode 100644 index 58bb77d8..00000000 --- a/how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: distributed-pytorch-with-horovod -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-nccl-gloo/distributed-pytorch-with-nccl-gloo.yml b/how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-nccl-gloo/distributed-pytorch-with-nccl-gloo.yml deleted file mode 100644 index a960ad7e..00000000 --- 
a/how-to-use-azureml/ml-frameworks/pytorch/distributed-pytorch-with-nccl-gloo/distributed-pytorch-with-nccl-gloo.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: distributed-pytorch-with-nccl-gloo -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.yml b/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.yml deleted file mode 100644 index c04135a1..00000000 --- a/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.yml +++ /dev/null @@ -1,10 +0,0 @@ -name: train-hyperparameter-tune-deploy-with-pytorch -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - pillow==5.4.1 - - matplotlib - - numpy==1.19.3 - - https://download.pytorch.org/whl/cpu/torch-1.6.0%2Bcpu-cp36-cp36m-win_amd64.whl - - https://download.pytorch.org/whl/cpu/torchvision-0.7.0%2Bcpu-cp36-cp36m-win_amd64.whl diff --git a/how-to-use-azureml/ml-frameworks/scikit-learn/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-deploy-with-sklearn.yml b/how-to-use-azureml/ml-frameworks/scikit-learn/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-deploy-with-sklearn.yml deleted file mode 100644 index 2691a849..00000000 --- a/how-to-use-azureml/ml-frameworks/scikit-learn/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-deploy-with-sklearn.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: train-hyperparameter-tune-deploy-with-sklearn -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - numpy diff --git a/how-to-use-azureml/ml-frameworks/tensorflow/distributed-tensorflow-with-horovod/distributed-tensorflow-with-horovod.yml 
b/how-to-use-azureml/ml-frameworks/tensorflow/distributed-tensorflow-with-horovod/distributed-tensorflow-with-horovod.yml deleted file mode 100644 index 3fbd7704..00000000 --- a/how-to-use-azureml/ml-frameworks/tensorflow/distributed-tensorflow-with-horovod/distributed-tensorflow-with-horovod.yml +++ /dev/null @@ -1,11 +0,0 @@ -name: distributed-tensorflow-with-horovod -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - keras - - tensorflow-gpu==1.13.2 - - horovod==0.19.1 - - matplotlib - - pandas - - fuse diff --git a/how-to-use-azureml/ml-frameworks/tensorflow/distributed-tensorflow-with-parameter-server/distributed-tensorflow-with-parameter-server.yml b/how-to-use-azureml/ml-frameworks/tensorflow/distributed-tensorflow-with-parameter-server/distributed-tensorflow-with-parameter-server.yml deleted file mode 100644 index bc5a30eb..00000000 --- a/how-to-use-azureml/ml-frameworks/tensorflow/distributed-tensorflow-with-parameter-server/distributed-tensorflow-with-parameter-server.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: distributed-tensorflow-with-parameter-server -dependencies: -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.yml b/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.yml deleted file mode 100644 index 76b7eabc..00000000 --- a/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.yml +++ /dev/null @@ -1,12 +0,0 @@ -name: train-hyperparameter-tune-deploy-with-tensorflow -dependencies: -- numpy -- matplotlib -- pip: - - azureml-sdk - - azureml-widgets - - pandas - - keras - - tensorflow==2.0.0 - - matplotlib - - fuse diff --git a/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.yml 
b/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.yml deleted file mode 100644 index 29c57633..00000000 --- a/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.yml +++ /dev/null @@ -1,7 +0,0 @@ -name: pong_rllib -dependencies: -- pip: - - azureml-sdk - - azureml-contrib-reinforcementlearning - - azureml-widgets - - matplotlib diff --git a/how-to-use-azureml/reinforcement-learning/cartpole-on-compute-instance/cartpole_ci.yml b/how-to-use-azureml/reinforcement-learning/cartpole-on-compute-instance/cartpole_ci.yml deleted file mode 100644 index c5a2ed39..00000000 --- a/how-to-use-azureml/reinforcement-learning/cartpole-on-compute-instance/cartpole_ci.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: cartpole_ci -dependencies: -- pip: - - azureml-sdk - - azureml-contrib-reinforcementlearning - - azureml-widgets diff --git a/how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.yml b/how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.yml deleted file mode 100644 index 48d5edfa..00000000 --- a/how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: cartpole_sc -dependencies: -- pip: - - azureml-sdk - - azureml-contrib-reinforcementlearning - - azureml-widgets diff --git a/how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.yml b/how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.yml deleted file mode 100644 index d801d77e..00000000 --- a/how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.yml +++ /dev/null @@ -1,8 +0,0 @@ -name: minecraft -dependencies: -- pip: - - azureml-sdk - - azureml-contrib-reinforcementlearning - - azureml-widgets - - tensorboard - - azureml-tensorboard diff --git a/how-to-use-azureml/reinforcement-learning/multiagent-particle-envs/particle.yml 
b/how-to-use-azureml/reinforcement-learning/multiagent-particle-envs/particle.yml deleted file mode 100644 index b1c52d07..00000000 --- a/how-to-use-azureml/reinforcement-learning/multiagent-particle-envs/particle.yml +++ /dev/null @@ -1,9 +0,0 @@ -name: particle -dependencies: -- pip: - - azureml-sdk - - azureml-contrib-reinforcementlearning - - azureml-widgets - - tensorboard - - azureml-tensorboard - - ipython diff --git a/how-to-use-azureml/responsible-ai/README.md b/how-to-use-azureml/responsible-ai/README.md new file mode 100644 index 00000000..3ea735bd --- /dev/null +++ b/how-to-use-azureml/responsible-ai/README.md @@ -0,0 +1,17 @@ +# AzureML Responsible AI + +AzureML Responsible AI empowers data scientists and developers to innovate responsibly with a growing set of tools including model interpretability and fairness. + +Follow these sample notebooks to learn about the model interpretability and fairness integration in Azure: + + + +# Responsible AI Sample Notebooks + +- **Visualize fairness metrics and model explanations** + - Dataset: [UCI Adult](https://archive.ics.uci.edu/ml/datasets/Adult) + - **[Jupyter Notebook](visualize-upload-loan-decision/rai-loan-decision.ipynb)** + - Train a model to predict annual income + - Generate fairness and interpretability explanations for the trained model + - Visualize the explanations in the notebook widget dashboard + - Upload the explanations to Azure to be viewed in AzureML studio diff --git a/how-to-use-azureml/responsible-ai/visualize-upload-loan-decision/rai-loan-decision.ipynb b/how-to-use-azureml/responsible-ai/visualize-upload-loan-decision/rai-loan-decision.ipynb new file mode 100644 index 00000000..a154615f --- /dev/null +++ b/how-to-use-azureml/responsible-ai/visualize-upload-loan-decision/rai-loan-decision.ipynb @@ -0,0 +1,720 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. 
All rights reserved.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/responsible-ai/visualize-upload-loan-decision/rai-loan-decision.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Assess Fairness, Explore Interpretability, and Mitigate Fairness Issues \n", + "\n", + "This notebook demonstrates how to use [InterpretML](https://interpret.ml), [Fairlearn](https://fairlearn.org), and the [Responsible AI Widgets'](https://github.com/microsoft/responsible-ai-widgets/) Fairness and Interpretability dashboards to understand a model trained on the Census dataset. The dataset poses a classification problem - given a range of data about 32,000 individuals, predict whether their annual income is above or below fifty thousand dollars per year.\n", + "\n", + "For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan.\n", + "\n", + "We will first train a fairness-unaware predictor, load its global and local explanations, and use the interpretability and fairness dashboards to demonstrate how this model leads to unfair decisions (under a specific notion of fairness called *demographic parity*). 
We then mitigate unfairness by applying the `GridSearch` algorithm from the `Fairlearn` package.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Install required packages" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install --upgrade fairlearn\n", + "%pip install --upgrade interpret-community\n", + "%pip install --upgrade raiwidgets" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After installing the packages, you must restart the kernel and then close and reopen the notebook." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Load and preprocess the dataset\n", + "\n", + "For simplicity, we load the dataset using `fetch_adult` from `fairlearn.datasets`, which provides the data in a cleaned format. We start by importing the various modules we're going to use:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from fairlearn.reductions import GridSearch\n", + "from fairlearn.reductions import DemographicParity, ErrorRate\n", + "from fairlearn.datasets import fetch_adult\n", + "from fairlearn.metrics import MetricFrame, selection_rate\n", + "\n", + "from sklearn import svm, neighbors, tree\n", + "from sklearn.compose import ColumnTransformer, make_column_selector\n", + "from sklearn.preprocessing import LabelEncoder, StandardScaler\n", + "from sklearn.linear_model import LogisticRegression\n", + "from sklearn.pipeline import Pipeline\n", + "from sklearn.impute import SimpleImputer\n", + "from sklearn.preprocessing import StandardScaler, OneHotEncoder\n", + "from sklearn.svm import SVC\n", + "from sklearn.metrics import accuracy_score\n", + "\n", + "import pandas as pd\n", + "import numpy as np\n", + "\n", + "# SHAP Tabular Explainer\n", + "from interpret.ext.blackbox import KernelExplainer\n", + "from interpret.ext.blackbox import MimicExplainer\n", + "from 
interpret.ext.glassbox import LGBMExplainableModel" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now load and inspect the data:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dataset = fetch_adult(as_frame=True)\n", + "X_raw, y = dataset['data'], dataset['target']\n", + "X_raw[\"race\"].value_counts().to_dict()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going to separate this attribute out and drop it from the main data. We then perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "sensitive_features = X_raw[['sex','race']]\n", + "\n", + "le = LabelEncoder()\n", + "y = le.fit_transform(y)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Finally, we split the data into training and test sets:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.model_selection import train_test_split\n", + "X_train, X_test, y_train, y_test, sensitive_features_train, sensitive_features_test = \\\n", + " train_test_split(X_raw, y, sensitive_features,\n", + " test_size = 0.2, random_state=0, stratify=y)\n", + "\n", + "# Work around indexing bug\n", + "X_train = X_train.reset_index(drop=True)\n", + "sensitive_features_train = sensitive_features_train.reset_index(drop=True)\n", + "X_test = X_test.reset_index(drop=True)\n", + "sensitive_features_test = sensitive_features_test.reset_index(drop=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Training a fairness-unaware predictor\n", + "\n", + 
"To show the effect of `Fairlearn` we will first train a standard ML predictor that does not incorporate fairness. For speed of demonstration, we use a simple logistic regression estimator from `sklearn`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "numeric_transformer = Pipeline(\n", + " steps=[\n", + " (\"impute\", SimpleImputer()),\n", + " (\"scaler\", StandardScaler()),\n", + " ]\n", + ")\n", + "categorical_transformer = Pipeline(\n", + " [\n", + " (\"impute\", SimpleImputer(strategy=\"most_frequent\")),\n", + " (\"ohe\", OneHotEncoder(handle_unknown=\"ignore\")),\n", + " ]\n", + ")\n", + "preprocessor = ColumnTransformer(\n", + " transformers=[\n", + " (\"num\", numeric_transformer, make_column_selector(dtype_exclude=\"category\")),\n", + " (\"cat\", categorical_transformer, make_column_selector(dtype_include=\"category\")),\n", + " ]\n", + ")\n", + "\n", + "model = Pipeline(\n", + " steps=[\n", + " (\"preprocessor\", preprocessor),\n", + " (\n", + " \"classifier\",\n", + " LogisticRegression(solver=\"liblinear\", fit_intercept=True),\n", + " ),\n", + " ]\n", + ")\n", + "\n", + "model.fit(X_train, y_train)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Generate model explanations" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Using SHAP KernelExplainer\n", + "# clf.steps[-1][1] returns the trained classification model\n", + "explainer = MimicExplainer(model.steps[-1][1], \n", + " X_train,\n", + " LGBMExplainableModel,\n", + " features=X_raw.columns, \n", + " classes=['Rejected', 'Approved'],\n", + " transformations=preprocessor)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Generate global explanations\n", + "Explain overall model predictions (global explanation)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": 
[], + "source": [ + "# Explain the model based on a subset of 1000 rows\n", + "global_explanation = explainer.explain_global(X_test[:1000])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "global_explanation.get_feature_importance_dict()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Generate local explanations\n", + "Explain local data points (individual instances)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# You can pass a specific data point or a group of data points to the explain_local function\n", + "# E.g., explain the first data point in the test set\n", + "instance_num = 1\n", + "local_explanation = explainer.explain_local(X_test[:instance_num])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Get the prediction for the first member of the test set (the instance explained above)\n", + "# and explain why the model made that prediction\n", + "prediction_value = model.predict(X_test)[0]\n", + "\n", + "sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]\n", + "sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print('local importance values: {}'.format(sorted_local_importance_values))\n", + "print('local importance names: {}'.format(sorted_local_importance_names))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Visualize model explanations\n", + "Load the interpretability visualization dashboard" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from raiwidgets import ExplanationDashboard" + ] + }, + { + "cell_type": "code", + "execution_count": null, + 
"metadata": {}, + "outputs": [], + "source": [ + "ExplanationDashboard(global_explanation, model, dataset=X_test[:1000], true_y=y_test[:1000])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can load this predictor into the Fairness dashboard, and examine how it is unfair:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Assess model fairness \n", + "Load the fairness visualization dashboard" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from raiwidgets import FairnessDashboard\n", + "\n", + "y_pred = model.predict(X_test)\n", + "\n", + "FairnessDashboard(sensitive_features=sensitive_features_test,\n", + " y_true=y_test,\n", + " y_pred=y_pred)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Looking at the disparity in accuracy, we see that males have an error rate about three times greater than females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.\n", + "\n", + "Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mitigation with Fairlearn (GridSearch)\n", + "\n", + "The `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox. 
`GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.\n", + "\n", + "For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (are approved for a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). We are using this metric for the sake of simplicity; in general, the appropriate fairness metric will not be obvious." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Fairlearn is not yet fully compatible with Pipelines, so we have to pass the estimator only\n", + "X_train_prep = preprocessor.transform(X_train).toarray()\n", + "X_test_prep = preprocessor.transform(X_test).toarray()\n", + "\n", + "sweep = GridSearch(LogisticRegression(solver=\"liblinear\", fit_intercept=True),\n", + " constraints=DemographicParity(),\n", + " grid_size=70)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Our algorithms provide `fit()` and `predict()` methods, so they behave in a similar manner to other ML packages in Python. We do however have to pass one extra argument to `fit()` - the column of protected attribute labels; the number of predictors to generate in our sweep was already set via `grid_size` when constructing the `GridSearch` object.\n", + "\n", + "After `fit()` completes, we extract the full set of predictors from the `GridSearch` object." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "sweep.fit(X_train_prep, y_train,\n", + " sensitive_features=sensitive_features_train.sex)\n", + "\n", + "predictors = sweep.predictors_" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We could load these predictors into the Fairness dashboard now. 
However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the sensitive feature). In general, one might not want to do this, since there may be other considerations beyond the strict optimization of error and disparity (of the given protected attribute)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "accuracies, disparities = [], []\n", + "\n", + "for predictor in predictors:\n", + " accuracy_metric_frame = MetricFrame(accuracy_score, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)\n", + " selection_rate_metric_frame = MetricFrame(selection_rate, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)\n", + " accuracies.append(accuracy_metric_frame.overall)\n", + " disparities.append(selection_rate_metric_frame.difference())\n", + " \n", + "all_results = pd.DataFrame({\"predictor\": predictors, \"accuracy\": accuracies, \"disparity\": disparities})\n", + "\n", + "all_models_dict = {\"unmitigated\": model.steps[-1][1]}\n", + "dominant_models_dict = {\"unmitigated\": model.steps[-1][1]}\n", + "base_name_format = \"grid_{0}\"\n", + "row_id = 0\n", + "for row in all_results.itertuples():\n", + " model_name = base_name_format.format(row_id)\n", + " all_models_dict[model_name] = row.predictor\n", + " accuracy_for_lower_or_eq_disparity = all_results[\"accuracy\"][all_results[\"disparity\"] <= row.disparity]\n", + " if row.accuracy >= accuracy_for_lower_or_eq_disparity.max():\n", + " dominant_models_dict[model_name] = row.predictor\n", + " row_id = row_id + 1" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can construct predictions for all the models, and also for the dominant models:" + ] + }, + { + "cell_type": 
"code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from raiwidgets import FairnessDashboard\n", + "\n", + "dashboard_all = {}\n", + "for name, predictor in all_models_dict.items():\n", + " value = predictor.predict(X_test_prep)\n", + " dashboard_all[name] = value\n", + " \n", + "dominant_all = {}\n", + "for name, predictor in dominant_models_dict.items():\n", + " dominant_all[name] = predictor.predict(X_test_prep)\n", + "\n", + "FairnessDashboard(sensitive_features=sensitive_features_test, \n", + " y_true=y_test,\n", + " y_pred=dominant_all)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can look at just the dominant models in the dashboard:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute \"sex\"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy.\n", + "\n", + "By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# AzureML integration\n", + "\n", + "We will now go through a brief example of the AzureML integration.\n", + "\n", + "The required packages can be installed via:\n", + "\n", + "```\n", + "pip install azureml-contrib-fairness\n", + "pip install azureml-interpret\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Connect to workspace\n", + "\n", + "Just like in the previous tutorials, we will need to connect to a [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class)?view=azure-ml-py).\n", + "\n", + "The following code will allow you to create a workspace if you don't already have one created. You must have an Azure subscription to create a workspace:\n", + "\n", + "```python\n", + "from azureml.core import Workspace\n", + "ws = Workspace.create(name='myworkspace',\n", + " subscription_id='',\n", + " resource_group='myresourcegroup',\n", + " create_resource_group=True,\n", + " location='eastus2')\n", + "```\n", + "\n", + "**If you are running this on a Notebook VM, you can import the existing workspace.**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Workspace\n", + "\n", + "ws = Workspace.from_config()\n", + "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Registering models\n", + "\n", + "The fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, we register each of the models in the `all_models_dict` dictionary into the workspace. For this, we have to save each model to a file, and then register that file:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import joblib\n", + "import os\n", + "from azureml.core import Model, Experiment, Run\n", + "\n", + "os.makedirs('models', exist_ok=True)\n", + "def register_model(name, model):\n", + " print(\"Registering \", name)\n", + " model_path = \"models/{0}.pkl\".format(name)\n", + " joblib.dump(value=model, filename=model_path)\n", + " registered_model = Model.register(model_path=model_path,\n", + " model_name=name,\n", + " workspace=ws)\n", + " print(\"Registered \", registered_model.id)\n", + " return registered_model.id\n", + "\n", + "# Register the estimators themselves (all_models_dict maps the same names\n", + "# to models that dashboard_all maps to predictions)\n", + "model_name_id_mapping = dict()\n", + "for name, model in all_models_dict.items():\n", + " m_id = register_model(name, model)\n", + " model_name_id_mapping[name] = m_id" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, produce new predictions dictionaries, with the updated names:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dashboard_all_ids = dict()\n", + "for name, y_pred in dashboard_all.items():\n", + " dashboard_all_ids[model_name_id_mapping[name]] = y_pred" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Uploading a dashboard\n", + "\n", + "We create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. 
Note that we use the `dashboard_all_ids` dictionary we just created:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "sf = { 'sex': sensitive_features_test.sex, 'race': sensitive_features_test.race }\n", + "\n", + "from fairlearn.metrics._group_metric_set import _create_group_metric_set\n", + "\n", + "dash_dict_all = _create_group_metric_set(y_true=y_test,\n", + " predictions=dashboard_all_ids,\n", + " sensitive_features=sf,\n", + " prediction_type='binary_classification')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, we import our `contrib` package, which contains the routine to perform the upload:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can create an Experiment, then a Run, and upload our dashboard to it:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "exp = Experiment(ws, 'responsible-ai-loan-decision')\n", + "print(exp)\n", + "\n", + "run = exp.start_logging()\n", + "try:\n", + " dashboard_title = \"Upload MultiAsset from Grid Search with Census Data Notebook\"\n", + " upload_id = upload_dashboard_dictionary(run,\n", + " dash_dict_all,\n", + " dashboard_name=dashboard_title)\n", + " print(\"\\nUploaded to id: {0}\\n\".format(upload_id))\n", + "\n", + " downloaded_dict = download_dashboard_by_upload_id(run, upload_id)\n", + "finally:\n", + " run.complete()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Uploading explanations\n", + "\n", + "Finally, we upload the global explanation computed earlier to the same run, so that it can be viewed in AzureML studio:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.interpret import ExplanationClient\n", + 
"\n", + "client = ExplanationClient.from_run(run)\n", + "client.upload_model_explanation(global_explanation, comment = \"census data global explanation\")" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "chgrego" + } + ], + "kernelspec": { + "display_name": "Python 3.6", + "language": "python", + "name": "python36" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} \ No newline at end of file diff --git a/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.yml b/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.yml deleted file mode 100644 index 42144437..00000000 --- a/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.yml +++ /dev/null @@ -1,8 +0,0 @@ -name: logging-api -dependencies: -- numpy -- matplotlib -- tqdm -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.yml b/how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.yml deleted file mode 100644 index 34a95ec8..00000000 --- a/how-to-use-azureml/track-and-monitor-experiments/manage-runs/manage-runs.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: manage-runs -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.yml b/how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.yml deleted file mode 100644 index 17aa9f1d..00000000 --- a/how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.yml +++ /dev/null @@ -1,10 +0,0 @@ 
-name: export-run-history-to-tensorboard -dependencies: -- pip: - - azureml-sdk - - azureml-tensorboard - - tensorflow - - tqdm - - scipy - - sklearn - - setuptools>=41.0.0 diff --git a/how-to-use-azureml/track-and-monitor-experiments/tensorboard/tensorboard/tensorboard.yml b/how-to-use-azureml/track-and-monitor-experiments/tensorboard/tensorboard/tensorboard.yml deleted file mode 100644 index 024d3600..00000000 --- a/how-to-use-azureml/track-and-monitor-experiments/tensorboard/tensorboard/tensorboard.yml +++ /dev/null @@ -1,7 +0,0 @@ -name: tensorboard -dependencies: -- pip: - - azureml-sdk - - azureml-tensorboard - - tensorflow - - setuptools>=41.0.0 diff --git a/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-local/train-local.yml b/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-local/train-local.yml deleted file mode 100644 index 5095b89f..00000000 --- a/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-local/train-local.yml +++ /dev/null @@ -1,7 +0,0 @@ -name: train-local -dependencies: -- scikit-learn -- matplotlib -- pip: - - azureml-sdk - - azureml-mlflow diff --git a/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-remote/train-remote.yml b/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-remote/train-remote.yml deleted file mode 100644 index e96f6ab6..00000000 --- a/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-remote/train-remote.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: train-remote -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.yml b/how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.yml deleted file mode 100644 index 57cc15b6..00000000 --- a/how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: train-on-amlcompute -dependencies: -- scikit-learn -- pip: - - azureml-sdk - - azureml-widgets diff 
--git a/how-to-use-azureml/training/train-on-local/train-on-local.yml b/how-to-use-azureml/training/train-on-local/train-on-local.yml deleted file mode 100644 index 76f64467..00000000 --- a/how-to-use-azureml/training/train-on-local/train-on-local.yml +++ /dev/null @@ -1,7 +0,0 @@ -name: train-on-local -dependencies: -- matplotlib -- scikit-learn -- pip: - - azureml-sdk - - azureml-widgets diff --git a/how-to-use-azureml/training/using-environments/using-environments.yml b/how-to-use-azureml/training/using-environments/using-environments.yml deleted file mode 100644 index 88422a40..00000000 --- a/how-to-use-azureml/training/using-environments/using-environments.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: using-environments -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-tutorial.yml b/how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-tutorial.yml deleted file mode 100644 index 6633d9e5..00000000 --- a/how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-tutorial.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: datadrift-tutorial -dependencies: -- pip: - - azureml-sdk - - azureml-datadrift diff --git a/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.yml b/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.yml deleted file mode 100644 index f33e9474..00000000 --- a/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.yml +++ /dev/null @@ -1,6 +0,0 @@ -name: pipeline-for-image-classification -dependencies: -- pip: - - azureml-sdk - - pandas<=0.23.4 - - fuse diff --git a/how-to-use-azureml/work-with-data/datasets-tutorial/scriptrun-with-data-input-output/how-to-use-scriptrun.yml b/how-to-use-azureml/work-with-data/datasets-tutorial/scriptrun-with-data-input-output/how-to-use-scriptrun.yml deleted file mode 100644 index 
87dc3b4c..00000000 --- a/how-to-use-azureml/work-with-data/datasets-tutorial/scriptrun-with-data-input-output/how-to-use-scriptrun.yml +++ /dev/null @@ -1,4 +0,0 @@ -name: how-to-use-scriptrun -dependencies: -- pip: - - azureml-sdk diff --git a/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.yml b/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.yml deleted file mode 100644 index af9acab3..00000000 --- a/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.yml +++ /dev/null @@ -1,5 +0,0 @@ -name: tabular-timeseries-dataset-filtering -dependencies: -- pip: - - azureml-sdk - - pandas<=0.23.4 diff --git a/how-to-use-azureml/work-with-data/datasets-tutorial/train-with-datasets/train-with-datasets.yml b/how-to-use-azureml/work-with-data/datasets-tutorial/train-with-datasets/train-with-datasets.yml deleted file mode 100644 index d13f92dc..00000000 --- a/how-to-use-azureml/work-with-data/datasets-tutorial/train-with-datasets/train-with-datasets.yml +++ /dev/null @@ -1,8 +0,0 @@ -name: train-with-datasets -dependencies: -- pip: - - azureml-sdk - - azureml-widgets - - pandas<=0.23.4 - - fuse - - scikit-learn diff --git a/index.md b/index.md index 5fa6209e..5082517f 100644 --- a/index.md +++ b/index.md @@ -128,6 +128,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an | [cartpole_sc](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.ipynb) | | | | | | | | [minecraft](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/minecraft-on-distributed-compute/minecraft.ipynb) | | | | | | | | 
[particle](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/reinforcement-learning/multiagent-particle-envs/particle.ipynb) | | | | | | | +| [rai-loan-decision](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/responsible-ai/visualize-upload-loan-decision/rai-loan-decision.ipynb) | | | | | | | | [Logging APIs](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb) | Logging APIs and analyzing results | None | None | None | None | None | | [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master//setup-environment/configuration.ipynb) | | | | | | | | [tutorial-1st-experiment-sdk-train](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/create-first-ml-experiment/tutorial-1st-experiment-sdk-train.ipynb) | | | | | | |