Mirror of https://github.com/Azure/MachineLearningNotebooks.git
Synced 2025-12-20 01:27:06 -05:00

Compare commits: azureml-sd ... azureml-sd (43 commits)
| SHA1 |
|---|
| a05444845b |
| 79c9f50c15 |
| 67e10e0f6b |
| 1ef0331a0f |
| 5e91c836b9 |
| ddbb3c45f6 |
| 8eed4e39d0 |
| b37c0297db |
| 968cc798d0 |
| 5c9ca452fb |
| 5e82680272 |
| 41841fc8c0 |
| 896bf63736 |
| d4751bf6ec |
| 3531fe8a21 |
| db6ae67940 |
| 2a479bb01e |
| d05eec92af |
| 70fdab0a28 |
| 7ce5a43b58 |
| d2a9dbb582 |
| a5d774683d |
| 0e850f0917 |
| 59f34b7179 |
| 2a3cb69004 |
| 42894ff81a |
| 2163cab50b |
| 255edb04c0 |
| cfce079278 |
| ae6f067c81 |
| 1b7ff724f3 |
| 8bba850db1 |
| b9e35ea0cb |
| ffa28aa89c |
| 6ab85a20e3 |
| 486c44d157 |
| cd80040dd8 |
| 465a5b13b1 |
| dcd2d58880 |
| 93bf4393f2 |
| d6ebb484a6 |
| 35afd43193 |
| 22597ac684 |
Dockerfiles/1.0.41/Dockerfile (new file, 29 lines)
@@ -0,0 +1,29 @@
FROM continuumio/miniconda:4.5.11

# install git
RUN apt-get update && apt-get upgrade -y && apt-get install -y git

# create a new conda environment named azureml
RUN conda create -n azureml -y -q python=3.6

# install additional packages used by sample notebooks; this is optional
RUN ["/bin/bash", "-c", "source activate azureml && conda install -y tqdm cython matplotlib scikit-learn"]

# install azureml-sdk components
RUN ["/bin/bash", "-c", "source activate azureml && pip install azureml-sdk[notebooks]==1.0.41"]

# clone Azure ML GitHub sample notebooks
RUN cd /home && git clone -b "azureml-sdk-1.0.41" --single-branch https://github.com/Azure/MachineLearningNotebooks.git

# generate jupyter configuration file
RUN ["/bin/bash", "-c", "source activate azureml && mkdir ~/.jupyter && cd ~/.jupyter && jupyter notebook --generate-config"]

# set an empty token for Jupyter to remove authentication.
# this is NOT recommended for a production environment
RUN echo "c.NotebookApp.token = ''" >> ~/.jupyter/jupyter_notebook_config.py

# open up port 8887 on the container
EXPOSE 8887

# start Jupyter notebook server on port 8887 when the container starts
CMD /bin/bash -c "cd /home/MachineLearningNotebooks && source activate azureml && jupyter notebook --port 8887 --no-browser --ip 0.0.0.0 --allow-root"
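A quick way to exercise the image above, not part of the diff itself: once the container is up, confirm from a notebook that the installed SDK matches the 1.0.41 branch the Dockerfile clones. A minimal sketch, assuming the `azureml` conda environment created above:

```python
# Smoke test for the Docker image: the pinned azureml-sdk version should
# match the notebook branch cloned into /home/MachineLearningNotebooks.
import azureml.core

print(azureml.core.VERSION)
assert azureml.core.VERSION == "1.0.41"
```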
@@ -1,3 +1,4 @@
This software is made available to you on the condition that you agree to
[your agreement][1] governing your use of Azure.
If you do not have an existing agreement governing your use of Azure, you agree that
README.md (11 lines)
@@ -52,7 +52,18 @@ The [How to use Azure ML](./how-to-use-azureml) folder contains specific example

Visit the following repos to see projects contributed by Azure ML users:

- [AMLSamples](https://github.com/Azure/AMLSamples) A number of end-to-end examples, including face recognition, predictive maintenance, customer churn and sentiment analysis.
- [Fine tune natural language processing models using Azure Machine Learning service](https://github.com/Microsoft/AzureML-BERT)
- [Fashion MNIST with Azure ML SDK](https://github.com/amynic/azureml-sdk-fashion)

## Data/Telemetry
This repository collects usage data and sends it to Microsoft to help improve our products and services. Read Microsoft's [privacy statement](https://privacy.microsoft.com/en-US/privacystatement) to learn more.

To opt out of tracking, please go to the raw markdown or .ipynb files and remove the following line of code:

```sh
""
```
This URL will be slightly different depending on the file.


@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -95,7 +102,7 @@
   "source": [
    "import azureml.core\n",
    "\n",
    "print(\"This notebook was created using version 1.0.39 of the Azure ML SDK\")\n",
    "print(\"This notebook was created using version 1.0.43 of the Azure ML SDK\")\n",
    "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
   ]
  },
@@ -2,7 +2,7 @@

Learn how to use Azure Machine Learning services for experimentation and model management.

If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration Notebook](../configuration.ipynb) first if you haven't already to establish your connection to the AzureML Workspace. Then, run the notebooks in the following recommended order.
As a pre-requisite, run the [configuration Notebook](../configuration.ipynb) first to set up your Azure ML Workspace. Then, run the notebooks in the following recommended order.

* [train-within-notebook](./training/train-within-notebook): Train a model while tracking run history, and learn how to deploy the model as a web service to Azure Container Instance.
* [train-on-local](./training/train-on-local): Learn how to submit a run to the local computer and use an Azure ML managed run configuration.
@@ -15,6 +15,3 @@ If you are using an Azure Machine Learning Notebook VM, you are all set. Otherw
* [enable-app-insights-in-production-service](./deployment/enable-app-insights-in-production-service): Learn how to use App Insights with a production web service.

Find quickstarts, end-to-end tutorials, and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).


@@ -10,7 +10,7 @@ dependencies:
- urllib3<1.24
- scipy>=1.0.0,<=1.1.0
- scikit-learn>=0.19.0,<=0.20.3
- pandas>=0.22.0,<0.23.0
- pandas>=0.22.0,<=0.23.4
- py-xgboost<=0.80

- pip:
@@ -0,0 +1,22 @@
name: azure_automl
dependencies:
  # The python interpreter version.
  # Currently Azure ML only supports 3.5.2 and later.
- nomkl
- python>=3.5.2,<3.6.8
- nb_conda
- matplotlib==2.1.0
- numpy>=1.11.0,<=1.16.2
- cython
- urllib3<1.24
- scipy>=1.0.0,<=1.1.0
- scikit-learn>=0.19.0,<=0.20.3
- pandas>=0.22.0,<0.23.0
- py-xgboost<=0.80

- pip:
  # Required packages for AzureML execution, history, and data preparation.
  - azureml-sdk[automl,explain]
  - azureml-widgets
  - pandas_ml
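An aside, not part of the diff: after creating an environment from this new file (the setup-script change below points it at `automl_env_mac.yml`), the pinned ranges can be sanity-checked from Python. A minimal sketch, assuming the `azure_automl` environment is active:

```python
# Check a few of the version bounds pinned in the environment file above.
from distutils.version import LooseVersion

import numpy
import pandas
import sklearn

assert LooseVersion("1.11.0") <= LooseVersion(numpy.__version__) <= LooseVersion("1.16.2")
assert LooseVersion("0.22.0") <= LooseVersion(pandas.__version__) < LooseVersion("0.23.0")
assert LooseVersion("0.19.0") <= LooseVersion(sklearn.__version__) <= LooseVersion("0.20.3")
print("environment matches the pinned ranges")
```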
@@ -12,7 +12,7 @@ fi

if [ "$AUTOML_ENV_FILE" == "" ]
then
    AUTOML_ENV_FILE="automl_env.yml"
    AUTOML_ENV_FILE="automl_env_mac.yml"
fi

if [ ! -f $AUTOML_ENV_FILE ]; then
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -66,11 +73,12 @@
    "import numpy as np\n",
    "import pandas as pd\n",
    "from sklearn import datasets\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "import azureml.core\n",
    "from azureml.core.experiment import Experiment\n",
    "from azureml.core.workspace import Workspace\n",
    "from azureml.train.automl import AutoMLConfig"
    "from azureml.train.automl import AutoMLConfig, constants"
   ]
  },
  {
@@ -106,7 +114,7 @@
   "source": [
    "## Data\n",
    "\n",
    "This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method."
    "This uses scikit-learn's [load_iris](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html) method."
   ]
  },
  {
@@ -115,11 +123,17 @@
   "metadata": {},
   "outputs": [],
   "source": [
    "digits = datasets.load_digits()\n",
    "iris = datasets.load_iris()\n",
    "X_train, X_test, y_train, y_test = train_test_split(iris.data,\n",
    "                                                    iris.target,\n",
    "                                                    test_size=0.2,\n",
    "                                                    random_state=0)\n",
    "\n",
    "# Exclude the first 100 rows from training so that they can be used for test.\n",
    "X_train = digits.data[100:,:]\n",
    "y_train = digits.target[100:]"
    "# Convert X_train and X_test to pandas DataFrames and set column names.\n",
    "# This is needed for initializing the input variable names of the ONNX model,\n",
    "# and for prediction with the ONNX model using the inference helper.\n",
    "X_train = pd.DataFrame(X_train, columns=['c1', 'c2', 'c3', 'c4'])\n",
    "X_test = pd.DataFrame(X_test, columns=['c1', 'c2', 'c3', 'c4'])"
   ]
  },
  {
@@ -155,9 +169,10 @@
    "                             primary_metric = 'AUC_weighted',\n",
    "                             iteration_timeout_minutes = 60,\n",
    "                             iterations = 10,\n",
    "                             verbosity = logging.INFO,\n",
    "                             verbosity = logging.INFO,\n",
    "                             X = X_train,\n",
    "                             y = y_train,\n",
    "                             preprocess=True,\n",
    "                             enable_onnx_compatible_models=True,\n",
    "                             path = project_folder)"
   ]
  },
@@ -253,6 +268,65 @@
    "onnx_fl_path = \"./best_model.onnx\"\n",
    "OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Predict with the ONNX model, using the onnxruntime package"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "import json\n",
    "from azureml.automl.core.onnx_convert import OnnxConvertConstants\n",
    "\n",
    "if sys.version_info < OnnxConvertConstants.OnnxIncompatiblePythonVersion:\n",
    "    python_version_compatible = True\n",
    "else:\n",
    "    python_version_compatible = False\n",
    "\n",
    "try:\n",
    "    import onnxruntime\n",
    "    from azureml.automl.core.onnx_convert import OnnxInferenceHelper\n",
    "    onnxrt_present = True\n",
    "except ImportError:\n",
    "    onnxrt_present = False\n",
    "\n",
    "def get_onnx_res(run):\n",
    "    res_path = '_debug_y_trans_converter.json'\n",
    "    run.download_file(name=constants.MODEL_RESOURCE_PATH_ONNX, output_file_path=res_path)\n",
    "    with open(res_path) as f:\n",
    "        onnx_res = json.load(f)\n",
    "    return onnx_res\n",
    "\n",
    "if onnxrt_present and python_version_compatible:\n",
    "    mdl_bytes = onnx_mdl.SerializeToString()\n",
    "    onnx_res = get_onnx_res(best_run)\n",
    "\n",
    "    onnxrt_helper = OnnxInferenceHelper(mdl_bytes, onnx_res)\n",
    "    pred_onnx, pred_prob_onnx = onnxrt_helper.predict(X_test)\n",
    "\n",
    "    print(pred_onnx)\n",
    "    print(pred_prob_onnx)\n",
    "else:\n",
    "    if not python_version_compatible:\n",
    "        print('Please use Python version 3.6 to run the inference helper.')\n",
    "    if not onnxrt_present:\n",
    "        print('Please install the onnxruntime package to do the prediction with the ONNX model.')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
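An aside, not part of the diff: if you want to look at the saved `best_model.onnx` without going through `OnnxInferenceHelper`, onnxruntime can report the model's input signature directly. A minimal inspection-only sketch (the exact input layout of AutoML-generated models is not shown in this diff):

```python
# Inspect the input signature of the ONNX model saved above.
import onnxruntime as rt

sess = rt.InferenceSession("./best_model.onnx")
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)
```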
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -185,7 +192,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create or Attach a Remote Linux DSVM"
    "### Create or Attach an AmlCompute cluster"
   ]
  },
  {
@@ -194,21 +201,36 @@
   "metadata": {},
   "outputs": [],
   "source": [
    "dsvm_name = 'mydsvmb'\n",
    "from azureml.core.compute import AmlCompute\n",
    "from azureml.core.compute import ComputeTarget\n",
    "\n",
    "try:\n",
    "    while ws.compute_targets[dsvm_name].provisioning_state == 'Creating':\n",
    "        time.sleep(1)\n",
    "\n",
    "    dsvm_compute = DsvmCompute(ws, dsvm_name)\n",
    "    print('Found existing DSVM.')\n",
    "except:\n",
    "    print('Creating a new DSVM.')\n",
    "    dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
    "    dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
    "    dsvm_compute.wait_for_completion(show_output = True)\n",
    "    print(\"Waiting one minute for ssh to be accessible\")\n",
    "    time.sleep(90) # Wait for ssh to be accessible"
    "# Choose a name for your cluster.\n",
    "amlcompute_cluster_name = \"cpucluster\"\n",
    "\n",
    "found = False\n",
    "\n",
    "# Check if this compute target already exists in the workspace.\n",
    "\n",
    "cts = ws.compute_targets\n",
    "if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
    "    found = True\n",
    "    print('Found existing compute target.')\n",
    "    compute_target = cts[amlcompute_cluster_name]\n",
    "\n",
    "if not found:\n",
    "    print('Creating a new compute target...')\n",
    "    provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
    "                                                                #vm_priority = 'lowpriority', # optional\n",
    "                                                                max_nodes = 6)\n",
    "\n",
    "    # Create the cluster.\n",
    "    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
    "\n",
    "    # Can poll for a minimum number of nodes and for a specific timeout.\n",
    "    # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
    "    compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
    "\n",
    "    # For a more detailed view of current AmlCompute status, use get_status()."
   ]
  },
  {
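An aside, not part of the diff: the closing comment in the cell above mentions `get_status()`. A minimal sketch of using it once the cluster exists (the serialized fields vary by SDK version):

```python
# Snapshot of the AmlCompute cluster state: node counts, errors,
# and scale settings, serialized to a plain dictionary.
status = compute_target.get_status()
print(status.serialize())
```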
@@ -220,9 +242,13 @@
    "from azureml.core.runconfig import RunConfiguration\n",
    "from azureml.core.conda_dependencies import CondaDependencies\n",
    "\n",
    "# create a new RunConfig object\n",
    "conda_run_config = RunConfiguration(framework=\"python\")\n",
    "\n",
    "conda_run_config.target = dsvm_compute\n",
    "# Set compute target to AmlCompute\n",
    "conda_run_config.target = compute_target\n",
    "conda_run_config.environment.docker.enabled = True\n",
    "conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
    "\n",
    "cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80'])\n",
    "conda_run_config.environment.python.conda_dependencies = cd"
@@ -287,6 +313,27 @@
    "remote_run.clean_preprocessor_cache()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Cancelling Runs\n",
    "You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Cancel the ongoing experiment and stop scheduling new iterations.\n",
    "# remote_run.cancel()\n",
    "\n",
    "# Cancel iteration 1 and move onto iteration 2.\n",
    "# remote_run.cancel_iteration(1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -9,6 +9,13 @@
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -112,7 +119,7 @@
   "metadata": {},
   "source": [
    "### Create or Attach existing AmlCompute\n",
    "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you attach the default `AmlCompute` as your training compute resource.\n",
    "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you create an AmlCompute cluster as your training compute resource.\n",
    "\n",
    "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
   ]
@@ -123,16 +130,38 @@
   "metadata": {},
   "outputs": [],
   "source": [
    "compute_target = ws.get_default_compute_target(\"CPU\")"
    "from azureml.core.compute import AmlCompute\n",
    "from azureml.core.compute import ComputeTarget\n",
    "\n",
    "# Choose a name for your cluster.\n",
    "amlcompute_cluster_name = \"cpucluster\"\n",
    "\n",
    "found = False\n",
    "\n",
    "# Check if this compute target already exists in the workspace.\n",
    "\n",
    "cts = ws.compute_targets\n",
    "if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
    "    found = True\n",
    "    print('Found existing compute target.')\n",
    "    compute_target = cts[amlcompute_cluster_name]\n",
    "\n",
    "if not found:\n",
    "    print('Creating a new compute target...')\n",
    "    provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
    "                                                                #vm_priority = 'lowpriority', # optional\n",
    "                                                                max_nodes = 6)\n",
    "\n",
    "    # Create the cluster.\n",
    "    compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
    "\n",
    "    # Can poll for a minimum number of nodes and for a specific timeout.\n",
    "    # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
    "    compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
    "\n",
    "    # For a more detailed view of current AmlCompute status, use get_status()."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
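An aside, not part of the diff: before settling on `vm_size` in the provisioning configuration above, the SDK can list the VM sizes available to the workspace. A minimal sketch (the dictionary fields returned may vary by SDK version):

```python
# List a few of the VM sizes the workspace can provision for AmlCompute.
from azureml.core.compute import AmlCompute

for vm in AmlCompute.supported_vmsizes(workspace=ws)[:5]:
    print(vm)
```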
@@ -1,558 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Microsoft Corporation. All rights reserved.\n",
    "\n",
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Automated Machine Learning\n",
    "_**Remote Execution using attach**_\n",
    "\n",
    "## Contents\n",
    "1. [Introduction](#Introduction)\n",
    "1. [Setup](#Setup)\n",
    "1. [Data](#Data)\n",
    "1. [Train](#Train)\n",
    "1. [Results](#Results)\n",
    "1. [Test](#Test)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "In this example we use scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) dataset to showcase how you can use AutoML to handle text data with remote attach.\n",
    "\n",
    "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
    "\n",
    "In this notebook you will learn how to:\n",
    "1. Create an `Experiment` in an existing `Workspace`.\n",
    "2. Attach an existing DSVM to a workspace.\n",
    "3. Configure AutoML using `AutoMLConfig`.\n",
    "4. Train the model using the DSVM.\n",
    "5. Explore the results.\n",
    "6. View the engineered names for featurized data and the featurization summary for all raw features.\n",
    "7. Test the best fitted model.\n",
    "\n",
    "In addition this notebook showcases the following features:\n",
    "- **Parallel** executions for iterations\n",
    "- **Asynchronous** tracking of progress\n",
    "- **Cancellation** of individual iterations or the entire run\n",
    "- Retrieving models for any iteration or logged metric\n",
    "- Specifying AutoML settings as `**kwargs`\n",
    "- Handling **text** data using the `preprocess` flag"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "import azureml.core\n",
    "from azureml.core.experiment import Experiment\n",
    "from azureml.core.workspace import Workspace\n",
    "from azureml.train.automl import AutoMLConfig"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ws = Workspace.from_config()\n",
    "\n",
    "# Choose a name for the run history container in the workspace.\n",
    "experiment_name = 'automl-remote-attach'\n",
    "project_folder = './sample_projects/automl-remote-attach'\n",
    "\n",
    "experiment = Experiment(ws, experiment_name)\n",
    "\n",
    "output = {}\n",
    "output['SDK version'] = azureml.core.VERSION\n",
    "output['Subscription ID'] = ws.subscription_id\n",
    "output['Workspace'] = ws.name\n",
    "output['Resource Group'] = ws.resource_group\n",
    "output['Location'] = ws.location\n",
    "output['Project Directory'] = project_folder\n",
    "output['Experiment Name'] = experiment.name\n",
    "pd.set_option('display.max_colwidth', -1)\n",
    "outputDf = pd.DataFrame(data = output, index = [''])\n",
    "outputDf.T"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Attach a Remote Linux DSVM\n",
    "To use a remote Docker compute target:\n",
    "1. Create a Linux DSVM in Azure, following these [instructions](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_runs`. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4GB per core.\n",
    "2. Enter the IP address, user name and password below.\n",
    "\n",
    "**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordingly. [Read more](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/detailed-troubleshoot-ssh-connection) on changing SSH ports for security reasons."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core.compute import ComputeTarget, RemoteCompute\n",
    "import time\n",
    "\n",
    "# Add your VM information below\n",
    "# If a compute with the specified compute_name already exists, it will be used and the dsvm_ip_addr, dsvm_ssh_port,\n",
    "# dsvm_username and dsvm_password will be ignored.\n",
    "compute_name = 'mydsvmb'\n",
    "dsvm_ip_addr = '<<ip_addr>>'\n",
    "dsvm_ssh_port = 22\n",
    "dsvm_username = '<<username>>'\n",
    "dsvm_password = '<<password>>'\n",
    "\n",
    "if compute_name in ws.compute_targets:\n",
    "    print('Using existing compute.')\n",
    "    dsvm_compute = ws.compute_targets[compute_name]\n",
    "else:\n",
    "    attach_config = RemoteCompute.attach_configuration(address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)\n",
    "    ComputeTarget.attach(workspace=ws, name=compute_name, attach_configuration=attach_config)\n",
    "\n",
    "    while ws.compute_targets[compute_name].provisioning_state == 'Creating':\n",
    "        time.sleep(1)\n",
    "\n",
    "    dsvm_compute = ws.compute_targets[compute_name]\n",
    "\n",
    "    if dsvm_compute.provisioning_state == 'Failed':\n",
    "        print('Attach failed.')\n",
    "        print(dsvm_compute.provisioning_errors)\n",
    "        dsvm_compute.detach()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core.runconfig import RunConfiguration\n",
    "from azureml.core.conda_dependencies import CondaDependencies\n",
    "import pkg_resources\n",
    "\n",
    "# create a new RunConfig object\n",
    "conda_run_config = RunConfiguration(framework=\"python\")\n",
    "\n",
    "# Set compute target to the Linux DSVM\n",
    "conda_run_config.target = dsvm_compute\n",
    "\n",
    "pandas_dependency = 'pandas==' + pkg_resources.get_distribution(\"pandas\").version\n",
    "\n",
    "cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80',pandas_dependency])\n",
    "conda_run_config.environment.python.conda_dependencies = cd"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data\n",
    "For remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from blob storage or local disk in this file.\n",
    "In this example, the `get_data()` function returns a [dictionary](README.md#getdata)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "if not os.path.exists(project_folder):\n",
    "    os.makedirs(project_folder)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile $project_folder/get_data.py\n",
    "\n",
    "import numpy as np\n",
    "from sklearn.datasets import fetch_20newsgroups\n",
    "\n",
    "def get_data():\n",
    "    remove = ('headers', 'footers', 'quotes')\n",
    "    categories = [\n",
    "        'alt.atheism',\n",
    "        'talk.religion.misc',\n",
    "        'comp.graphics',\n",
    "        'sci.space',\n",
    "    ]\n",
    "    data_train = fetch_20newsgroups(subset = 'train', categories = categories,\n",
    "                                    shuffle = True, random_state = 42,\n",
    "                                    remove = remove)\n",
    "\n",
    "    X_train = np.array(data_train.data).reshape((len(data_train.data),1))\n",
    "    y_train = np.array(data_train.target)\n",
    "\n",
    "    return { \"X\" : X_train, \"y\" : y_train }"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train\n",
    "\n",
    "You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local executions too.\n",
    "\n",
    "**Note:** When using a remote DSVM, you can't pass Numpy arrays directly to the fit method.\n",
    "\n",
    "|Property|Description|\n",
    "|-|-|\n",
    "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
    "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
    "|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
    "|**n_cross_validations**|Number of cross validation splits.|\n",
    "|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|\n",
    "|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|\n",
    "|**enable_cache**|Setting this to *True* causes preprocessing to be done only once, with the same preprocessed data reused for all iterations. The default value is *True*.|\n",
    "|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline.<br>Default is *1*; you can set it to *-1* to use all cores.|"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "automl_settings = {\n",
    "    \"iteration_timeout_minutes\": 60,\n",
    "    \"iterations\": 4,\n",
    "    \"n_cross_validations\": 5,\n",
    "    \"primary_metric\": 'AUC_weighted',\n",
    "    \"preprocess\": True,\n",
    "    \"max_cores_per_iteration\": 2\n",
    "}\n",
    "\n",
    "automl_config = AutoMLConfig(task = 'classification',\n",
    "                             path = project_folder,\n",
    "                             run_configuration=conda_run_config,\n",
    "                             data_script = project_folder + \"/get_data.py\",\n",
    "                             **automl_settings\n",
    "                            )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "remote_run = experiment.submit(automl_config)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "remote_run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Results\n",
    "#### Widget for Monitoring Runs\n",
    "\n",
    "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
    "\n",
    "You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`\n",
    "\n",
    "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.widgets import RunDetails\n",
    "RunDetails(remote_run).show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Wait until the run finishes.\n",
    "remote_run.wait_for_completion(show_output = True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Pre-process cache cleanup\n",
    "The preprocessed data is cached in the user's default file store. When the run has completed, the cache can be cleaned by running the cell below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "remote_run.clean_preprocessor_cache()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "#### Retrieve All Child Runs\n",
    "You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "children = list(remote_run.get_children())\n",
    "metricslist = {}\n",
    "for run in children:\n",
    "    properties = run.get_properties()\n",
    "    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
    "    metricslist[int(properties['iteration'])] = metrics\n",
    "\n",
    "rundata = pd.DataFrame(metricslist).sort_index(1)\n",
    "rundata"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Cancelling Runs\n",
    "You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Cancel the ongoing experiment and stop scheduling new iterations.\n",
    "# remote_run.cancel()\n",
    "\n",
    "# Cancel iteration 1 and move onto iteration 2.\n",
    "# remote_run.cancel_iteration(1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Retrieve the Best Model\n",
    "\n",
    "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "best_run, fitted_model = remote_run.get_output()\n",
    "print(best_run)\n",
    "print(fitted_model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### View the engineered names for featurized data\n",
    "Below we display the engineered feature names generated for the featurized data using the preprocessing featurization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fitted_model.named_steps['datatransformer'].get_engineered_feature_names()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### View the featurization summary\n",
    "Below we display the featurization that was performed on the different raw features in the user data. For each raw feature in the user data, the following information is displayed:\n",
    "- Raw feature name\n",
    "- Number of engineered features formed out of this raw feature\n",
    "- Type detected\n",
    "- If feature was dropped\n",
    "- List of feature transformations for the raw feature"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fitted_model.named_steps['datatransformer'].get_featurization_summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Best Model Based on Any Other Metric\n",
    "Show the run and the model which has the largest `accuracy` value:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# lookup_metric = \"accuracy\"\n",
    "# best_run, fitted_model = remote_run.get_output(metric = lookup_metric)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Model from a Specific Iteration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "iteration = 0\n",
    "zero_run, zero_model = remote_run.get_output(iteration = iteration)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Test"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load test data.\n",
    "from pandas_ml import ConfusionMatrix\n",
    "from sklearn.datasets import fetch_20newsgroups\n",
    "\n",
    "remove = ('headers', 'footers', 'quotes')\n",
    "categories = [\n",
    "    'alt.atheism',\n",
    "    'talk.religion.misc',\n",
    "    'comp.graphics',\n",
    "    'sci.space',\n",
    "]\n",
    "\n",
    "data_test = fetch_20newsgroups(subset = 'test', categories = categories,\n",
    "                               shuffle = True, random_state = 42,\n",
    "                               remove = remove)\n",
    "\n",
    "X_test = np.array(data_test.data).reshape((len(data_test.data),1))\n",
    "y_test = data_test.target\n",
    "\n",
    "# Test our best pipeline.\n",
    "\n",
    "y_pred = fitted_model.predict(X_test)\n",
    "y_pred_strings = [data_test.target_names[i] for i in y_pred]\n",
    "y_test_strings = [data_test.target_names[i] for i in y_test]\n",
    "\n",
    "cm = ConfusionMatrix(y_test_strings, y_pred_strings)\n",
    "print(cm)\n",
    "cm.plot()"
   ]
  }
 ],
 "metadata": {
  "authors": [
   {
    "name": "savitam"
   }
  ],
  "kernelspec": {
   "display_name": "Python 3.6",
   "language": "python",
   "name": "python36"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
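An aside, not part of the deleted notebook: the `get_data()` contract it relied on is just a module-level function returning a dictionary with `"X"` and `"y"` entries, which AutoML imported on the remote compute. A minimal local sketch of that contract, mirroring the 20newsgroups cell above:

```python
# Sketch of the get_data() contract used by the deleted remote-attach notebook:
# a module-level function returning {"X": features, "y": labels}.
import numpy as np
from sklearn.datasets import fetch_20newsgroups

def get_data():
    data = fetch_20newsgroups(subset='train', categories=['sci.space'],
                              remove=('headers', 'footers', 'quotes'))
    X = np.array(data.data).reshape((len(data.data), 1))
    y = np.array(data.target)
    return {"X": X, "y": y}

sample = get_data()
assert len(sample["X"]) == len(sample["y"])
```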
@@ -1,555 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Automated Machine Learning\n",
|
||||
"_**Remote Execution using AmlCompute**_\n",
|
||||
"\n",
|
||||
"## Contents\n",
|
||||
"1. [Introduction](#Introduction)\n",
|
||||
"1. [Setup](#Setup)\n",
|
||||
"1. [Data](#Data)\n",
|
||||
"1. [Train](#Train)\n",
|
||||
"1. [Results](#Results)\n",
|
||||
"1. [Test](#Test)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Introduction\n",
|
||||
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
|
||||
"\n",
|
||||
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
|
||||
"\n",
|
||||
"In this notebook you would see\n",
|
||||
"1. Create an `Experiment` in an existing `Workspace`.\n",
|
||||
"2. Create or Attach existing AmlCompute to a workspace.\n",
|
||||
"3. Configure AutoML using `AutoMLConfig`.\n",
|
||||
"4. Train the model using AmlCompute\n",
|
||||
"5. Explore the results.\n",
|
||||
"6. Test the best fitted model.\n",
|
||||
"\n",
|
||||
"In addition this notebook showcases the following features\n",
|
||||
"- **Parallel** executions for iterations\n",
|
||||
"- **Asynchronous** tracking of progress\n",
|
||||
"- **Cancellation** of individual iterations or the entire run\n",
|
||||
"- Retrieving models for any iteration or logged metric\n",
|
||||
"- Specifying AutoML settings as `**kwargs`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import logging\n",
|
||||
"import os\n",
|
||||
"import csv\n",
|
||||
"\n",
|
||||
"from matplotlib import pyplot as plt\n",
|
||||
"import numpy as np\n",
|
||||
"import pandas as pd\n",
|
||||
"from sklearn import datasets\n",
|
||||
"\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core.experiment import Experiment\n",
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"from azureml.train.automl import AutoMLConfig"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"# Choose a name for the run history container in the workspace.\n",
|
||||
"experiment_name = 'automl-remote-amlcompute'\n",
|
||||
"project_folder = './project'\n",
|
||||
"\n",
|
||||
"experiment = Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
"output = {}\n",
|
||||
"output['SDK version'] = azureml.core.VERSION\n",
|
||||
"output['Subscription ID'] = ws.subscription_id\n",
|
||||
"output['Workspace Name'] = ws.name\n",
|
||||
"output['Resource Group'] = ws.resource_group\n",
|
||||
"output['Location'] = ws.location\n",
|
||||
"output['Project Directory'] = project_folder\n",
|
||||
"output['Experiment Name'] = experiment.name\n",
|
||||
"pd.set_option('display.max_colwidth', -1)\n",
|
||||
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create or Attach existing AmlCompute\n",
|
||||
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you create `AmlCompute` as your training compute resource.\n",
|
||||
"\n",
|
||||
"**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n",
|
||||
"\n",
|
||||
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import AmlCompute\n",
|
||||
"from azureml.core.compute import ComputeTarget\n",
|
||||
"\n",
|
||||
"# Choose a name for your cluster.\n",
|
||||
"amlcompute_cluster_name = \"automlcl\"\n",
|
||||
"\n",
|
||||
"found = False\n",
|
||||
"# Check if this compute target already exists in the workspace.\n",
|
||||
"cts = ws.compute_targets\n",
|
||||
"if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n",
|
||||
" found = True\n",
|
||||
" print('Found existing compute target.')\n",
|
||||
" compute_target = cts[amlcompute_cluster_name]\n",
|
||||
" \n",
|
||||
"if not found:\n",
|
||||
" print('Creating a new compute target...')\n",
|
||||
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n",
|
||||
" #vm_priority = 'lowpriority', # optional\n",
|
||||
" max_nodes = 6)\n",
|
||||
"\n",
|
||||
" # Create the cluster.\n",
|
||||
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
|
||||
" \n",
|
||||
" # Can poll for a minimum number of nodes and for a specific timeout.\n",
|
||||
" # If no min_node_count is provided, it will use the scale settings for the cluster.\n",
|
||||
" compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
|
||||
" \n",
|
||||
" # For a more detailed view of current AmlCompute status, use get_status()."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Data\n",
|
||||
"For remote executions, you need to make the data accessible from the remote compute.\n",
|
||||
"This can be done by uploading the data to DataStore.\n",
|
||||
"In this example, we upload scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"data_train = datasets.load_digits()\n",
|
||||
"\n",
|
||||
"if not os.path.isdir('data'):\n",
|
||||
" os.mkdir('data')\n",
|
||||
" \n",
|
||||
"if not os.path.exists(project_folder):\n",
|
||||
" os.makedirs(project_folder)\n",
|
||||
" \n",
|
||||
"pd.DataFrame(data_train.data).to_csv(\"data/X_train.tsv\", index=False, header=False, quoting=csv.QUOTE_ALL, sep=\"\\t\")\n",
|
||||
"pd.DataFrame(data_train.target).to_csv(\"data/y_train.tsv\", index=False, header=False, sep=\"\\t\")\n",
|
||||
"\n",
|
||||
"ds = ws.get_default_datastore()\n",
|
||||
"ds.upload(src_dir='./data', target_path='bai_data', overwrite=True, show_progress=True)\n",
|
||||
"\n",
|
||||
"from azureml.core.runconfig import DataReferenceConfiguration\n",
|
||||
"dr = DataReferenceConfiguration(datastore_name=ds.name, \n",
|
||||
" path_on_datastore='bai_data', \n",
|
||||
" path_on_compute='/tmp/azureml_runs',\n",
|
||||
" mode='download', # download files from datastore to compute target\n",
|
||||
" overwrite=False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"# create a new RunConfig object\n",
|
||||
"conda_run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to AmlCompute\n",
|
||||
"conda_run_config.target = compute_target\n",
|
||||
"conda_run_config.environment.docker.enabled = True\n",
|
||||
"conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"# set the data reference of the run coonfiguration\n",
|
||||
"conda_run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])\n",
|
||||
"conda_run_config.environment.python.conda_dependencies = cd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile $project_folder/get_data.py\n",
|
||||
"\n",
|
||||
"import pandas as pd\n",
|
||||
"\n",
|
||||
"def get_data():\n",
|
||||
" X_train = pd.read_csv(\"/tmp/azureml_runs/bai_data/X_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
" y_train = pd.read_csv(\"/tmp/azureml_runs/bai_data/y_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
"\n",
|
||||
" return { \"X\" : X_train.values, \"y\" : y_train[0].values }\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Train\n",
|
||||
"\n",
|
||||
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
|
||||
"\n",
|
||||
"**Note:** When using AmlCompute, you can't pass Numpy arrays directly to the fit method.\n",
|
||||
"\n",
|
||||
"|Property|Description|\n",
|
||||
"|-|-|\n",
|
||||
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
|
||||
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
|
||||
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
||||
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
||||
"|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"automl_settings = {\n",
|
||||
" \"iteration_timeout_minutes\": 2,\n",
|
||||
" \"iterations\": 20,\n",
|
||||
" \"n_cross_validations\": 5,\n",
|
||||
" \"primary_metric\": 'AUC_weighted',\n",
|
||||
" \"preprocess\": False,\n",
|
||||
" \"max_concurrent_iterations\": 5,\n",
|
||||
" \"verbosity\": logging.INFO\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"automl_config = AutoMLConfig(task = 'classification',\n",
|
||||
" debug_log = 'automl_errors.log',\n",
|
||||
" path = project_folder,\n",
|
||||
" run_configuration=conda_run_config,\n",
|
||||
" data_script = project_folder + \"/get_data.py\",\n",
|
||||
" **automl_settings\n",
|
||||
" )\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.\n",
|
||||
"In this example, we specify `show_output = False` to suppress console output while the run is in progress."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run = experiment.submit(automl_config, show_output = False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Results\n",
|
||||
"\n",
|
||||
"#### Loading executed runs\n",
|
||||
"In case you need to load a previously executed run, enable the cell below and replace the `run_id` value."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"remote_run = AutoMLRun(experiment = experiment, run_id = 'AutoML_5db13491-c92a-4f1d-b622-8ab8d973a058')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Widget for Monitoring Runs\n",
|
||||
"\n",
|
||||
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
|
||||
"\n",
|
||||
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`\n",
|
||||
"\n",
|
||||
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"RunDetails(remote_run).show() "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Wait until the run finishes.\n",
|
||||
"remote_run.wait_for_completion(show_output = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"#### Retrieve All Child Runs\n",
|
||||
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"children = list(remote_run.get_children())\n",
|
||||
"metricslist = {}\n",
|
||||
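"# Collect the numeric metrics logged by each child run, keyed by iteration.\n",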
"for run in children:\n",
|
||||
" properties = run.get_properties()\n",
|
||||
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}\n",
|
||||
" metricslist[int(properties['iteration'])] = metrics\n",
|
||||
"\n",
|
||||
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
|
||||
"rundata"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Cancelling Runs\n",
|
||||
"\n",
|
||||
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Cancel the ongoing experiment and stop scheduling new iterations.\n",
|
||||
"# remote_run.cancel()\n",
|
||||
"\n",
|
||||
"# Cancel iteration 1 and move onto iteration 2.\n",
|
||||
"# remote_run.cancel_iteration(1)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Retrieve the Best Model\n",
|
||||
"\n",
|
||||
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"best_run, fitted_model = remote_run.get_output()\n",
|
||||
"print(best_run)\n",
|
||||
"print(fitted_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Best Model Based on Any Other Metric\n",
|
||||
"Show the run and the model which has the smallest `log_loss` value:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"lookup_metric = \"log_loss\"\n",
|
||||
"best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n",
|
||||
"print(best_run)\n",
|
||||
"print(fitted_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Model from a Specific Iteration\n",
|
||||
"Show the run and the model from the third iteration:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"iteration = 3\n",
|
||||
"third_run, third_model = remote_run.get_output(iteration=iteration)\n",
|
||||
"print(third_run)\n",
|
||||
"print(third_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Test\n",
|
||||
"\n",
|
||||
"#### Load Test Data"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"digits = datasets.load_digits()\n",
|
||||
"X_test = digits.data[:10, :]\n",
|
||||
"y_test = digits.target[:10]\n",
|
||||
"images = digits.images[:10]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Testing Our Best Fitted Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Randomly select digits and test.\n",
|
||||
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
|
||||
" print(index)\n",
|
||||
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
|
||||
" label = y_test[index]\n",
|
||||
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
|
||||
" fig = plt.figure(1, figsize=(3,3))\n",
|
||||
" ax1 = fig.add_axes((0,0,.8,.8))\n",
|
||||
" ax1.set_title(title)\n",
|
||||
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
|
||||
" plt.show()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "savitam"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,586 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Automated Machine Learning\n",
|
||||
"_**Remote Execution with DataStore**_\n",
|
||||
"\n",
|
||||
"## Contents\n",
|
||||
"1. [Introduction](#Introduction)\n",
|
||||
"1. [Setup](#Setup)\n",
|
||||
"1. [Data](#Data)\n",
|
||||
"1. [Train](#Train)\n",
|
||||
"1. [Results](#Results)\n",
|
||||
"1. [Test](#Test)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Introduction\n",
|
||||
"This sample accesses a data file on a remote DSVM through DataStore. Advantages of using data store are:\n",
|
||||
"1. DataStore secures the access details.\n",
|
||||
"2. DataStore supports read, write to blob and file store\n",
|
||||
"3. AutoML natively supports copying data from DataStore to DSVM\n",
|
||||
"\n",
|
||||
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
|
||||
"\n",
|
||||
"In this notebook you would see\n",
|
||||
"1. Storing data in DataStore.\n",
|
||||
"2. get_data returning data from DataStore."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"As part of the setup you have already created a <b>Workspace</b>. For AutoML you would need to create an <b>Experiment</b>. An <b>Experiment</b> is a named object in a <b>Workspace</b>, which is used to run experiments."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import logging\n",
|
||||
"import os\n",
|
||||
"import time\n",
|
||||
"\n",
|
||||
"import numpy as np\n",
|
||||
"import pandas as pd\n",
|
||||
"\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core.compute import DsvmCompute\n",
|
||||
"from azureml.core.experiment import Experiment\n",
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"from azureml.train.automl import AutoMLConfig"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"# choose a name for experiment\n",
|
||||
"experiment_name = 'automl-remote-datastore-file'\n",
|
||||
"# project folder\n",
|
||||
"project_folder = './sample_projects/automl-remote-datastore-file'\n",
|
||||
"\n",
|
||||
"experiment=Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
"output = {}\n",
|
||||
"output['SDK version'] = azureml.core.VERSION\n",
|
||||
"output['Subscription ID'] = ws.subscription_id\n",
|
||||
"output['Workspace'] = ws.name\n",
|
||||
"output['Resource Group'] = ws.resource_group\n",
|
||||
"output['Location'] = ws.location\n",
|
||||
"output['Project Directory'] = project_folder\n",
|
||||
"output['Experiment Name'] = experiment.name\n",
|
||||
"pd.set_option('display.max_colwidth', -1)\n",
|
||||
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a Remote Linux DSVM\n",
|
||||
"Note: If creation fails with a message about Marketplace purchase eligibilty, go to portal.azure.com, start creating DSVM there, and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled it, you can exit without actually creating VM.\n",
|
||||
"\n",
|
||||
"**Note**: By default SSH runs on port 22 and you don't need to specify it. But if for security reasons you can switch to a different port (such as 5022), you can append the port number to the address. [Read more](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/detailed-troubleshoot-ssh-connection) on this."
|
||||
]
|
||||
},
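For the non-default-port case described in the note above, here is a minimal sketch (not from the original notebook) of attaching an existing VM using `RemoteCompute.attach_configuration`; the address, port, and credentials are hypothetical placeholders:

```python
# Sketch: attach an existing VM whose SSH daemon listens on a non-default port.
# The address, port, and credentials below are hypothetical placeholders.
from azureml.core.compute import ComputeTarget, RemoteCompute

attach_config = RemoteCompute.attach_configuration(
    address='my-dsvm.eastus2.cloudapp.azure.com',  # hypothetical DNS name
    ssh_port=5022,                                 # non-default SSH port
    username='azureuser',
    password='<ssh-password>')
vm_target = ComputeTarget.attach(ws, 'attached-dsvm', attach_config)
vm_target.wait_for_completion(show_output=True)
```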
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"compute_target_name = 'mydsvmc'\n",
|
||||
"\n",
|
||||
"try:\n",
|
||||
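"    # If a previous provisioning attempt is still in flight, wait for it to finish.\n",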
" while ws.compute_targets[compute_target_name].provisioning_state == 'Creating':\n",
|
||||
" time.sleep(1)\n",
|
||||
" \n",
|
||||
" dsvm_compute = DsvmCompute(workspace=ws, name=compute_target_name)\n",
|
||||
" print('found existing:', dsvm_compute.name)\n",
|
||||
"except:\n",
|
||||
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size=\"Standard_D2_v2\")\n",
|
||||
" dsvm_compute = DsvmCompute.create(ws, name=compute_target_name, provisioning_configuration=dsvm_config)\n",
|
||||
" dsvm_compute.wait_for_completion(show_output=True)\n",
|
||||
" print(\"Waiting one minute for ssh to be accessible\")\n",
|
||||
" time.sleep(90) # Wait for ssh to be accessible"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Data\n",
|
||||
"\n",
|
||||
"### Copy data file to local\n",
|
||||
"\n",
|
||||
"Download the data file.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if not os.path.isdir('data'):\n",
|
||||
" os.mkdir('data') "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from sklearn.datasets import fetch_20newsgroups\n",
|
||||
"import csv\n",
|
||||
"\n",
|
||||
"remove = ('headers', 'footers', 'quotes')\n",
|
||||
"categories = [\n",
|
||||
" 'alt.atheism',\n",
|
||||
" 'talk.religion.misc',\n",
|
||||
" 'comp.graphics',\n",
|
||||
" 'sci.space',\n",
|
||||
" ]\n",
|
||||
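"# Fetch the 20 Newsgroups training subset, stripping headers, footers, and quotes.\n",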
"data_train = fetch_20newsgroups(subset = 'train', categories = categories,\n",
|
||||
" shuffle = True, random_state = 42,\n",
|
||||
" remove = remove)\n",
|
||||
" \n",
|
||||
"pd.DataFrame(data_train.data).to_csv(\"data/X_train.tsv\", index=False, header=False, quoting=csv.QUOTE_ALL, sep=\"\\t\")\n",
|
||||
"pd.DataFrame(data_train.target).to_csv(\"data/y_train.tsv\", index=False, header=False, sep=\"\\t\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Upload data to the cloud"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now make the data accessible remotely by uploading that data from your local machine into Azure so it can be accessed for remote training. The datastore is a convenient construct associated with your workspace for you to upload/download data, and interact with it from your remote compute targets. It is backed by Azure blob storage account.\n",
|
||||
"\n",
|
||||
"The data.tsv files are uploaded into a directory named data at the root of the datastore."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#blob_datastore = Datastore(ws, blob_datastore_name)\n",
|
||||
"ds = ws.get_default_datastore()\n",
|
||||
"print(ds.datastore_type, ds.account_name, ds.container_name)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# ds.upload_files(\"data.tsv\")\n",
|
||||
"ds.upload(src_dir='./data', target_path='data', overwrite=True, show_progress=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Configure & Run\n",
|
||||
"\n",
|
||||
"First let's create a DataReferenceConfigruation object to inform the system what data folder to download to the compute target.\n",
|
||||
"The path_on_compute should be an absolute path to ensure that the data files are downloaded only once. The get_data method should use this same path to access the data files."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import DataReferenceConfiguration\n",
|
||||
"dr = DataReferenceConfiguration(datastore_name=ds.name, \n",
|
||||
" path_on_datastore='data', \n",
|
||||
" path_on_compute='/tmp/azureml_runs',\n",
|
||||
" mode='download', # download files from datastore to compute target\n",
|
||||
" overwrite=False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"import pkg_resources\n",
|
||||
"\n",
|
||||
"# create a new RunConfig object\n",
|
||||
"conda_run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to the Linux DSVM\n",
|
||||
"conda_run_config.target = dsvm_compute\n",
|
||||
"# set the data reference of the run coonfiguration\n",
|
||||
"conda_run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
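"# Pin pandas on the remote compute to the version installed locally.\n",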
"pandas_dependency = 'pandas==' + pkg_resources.get_distribution(\"pandas\").version\n",
|
||||
"\n",
|
||||
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80',pandas_dependency])\n",
|
||||
"conda_run_config.environment.python.conda_dependencies = cd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create Get Data File\n",
|
||||
"For remote executions you should author a get_data.py file containing a get_data() function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.\n",
|
||||
"\n",
|
||||
"The *get_data()* function returns a [dictionary](README.md#getdata).\n",
|
||||
"\n",
|
||||
"The read_csv uses the path_on_compute value specified in the DataReferenceConfiguration call plus the path_on_datastore folder and then the actual file name."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"if not os.path.exists(project_folder):\n",
|
||||
" os.makedirs(project_folder)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile $project_folder/get_data.py\n",
|
||||
"\n",
|
||||
"import pandas as pd\n",
|
||||
"\n",
|
||||
"def get_data():\n",
|
||||
" X_train = pd.read_csv(\"/tmp/azureml_runs/data/X_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
" y_train = pd.read_csv(\"/tmp/azureml_runs/data/y_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
"\n",
|
||||
" return { \"X\" : X_train.values, \"y\" : y_train[0].values }"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Train\n",
|
||||
"\n",
|
||||
"You can specify automl_settings as **kwargs** as well. Also note that you can use the get_data() symantic for local excutions too. \n",
|
||||
"\n",
|
||||
"<i>Note: For Remote DSVM and Batch AI you cannot pass Numpy arrays directly to AutoMLConfig.</i>\n",
|
||||
"\n",
|
||||
"|Property|Description|\n",
|
||||
"|-|-|\n",
|
||||
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
|
||||
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration|\n",
|
||||
"|**iterations**|Number of iterations. In each iteration Auto ML trains a specific pipeline with the data|\n",
|
||||
"|**n_cross_validations**|Number of cross validation splits|\n",
|
||||
"|**max_concurrent_iterations**|Max number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM\n",
|
||||
"|**preprocess**| *True/False* <br>Setting this to *True* enables Auto ML to perform preprocessing <br>on the input to handle *missing data*, and perform some common *feature extraction*|\n",
|
||||
"|**enable_cache**|Setting this to *True* enables preprocess done once and reuse the same preprocessed data for all the iterations. Default value is True.|\n",
|
||||
"|**max_cores_per_iteration**| Indicates how many cores on the compute target would be used to train a single pipeline.<br> Default is *1*, you can set it to *-1* to use all cores|"
|
||||
]
|
||||
},
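The configuration cell that follows sets `max_cores_per_iteration` but not `enable_cache`; as a minimal sketch (values illustrative only, not from the original notebook), both options from the table can be passed through the same settings dictionary:

```python
# Illustrative settings only: exercise the caching and per-iteration core options.
automl_settings = {
    "iterations": 4,
    "n_cross_validations": 5,
    "primary_metric": 'AUC_weighted',
    "preprocess": True,
    "enable_cache": True,           # preprocess once; reuse the cached data for every iteration
    "max_cores_per_iteration": -1   # use all cores on the compute target
}
```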
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"automl_settings = {\n",
|
||||
" \"iteration_timeout_minutes\": 60,\n",
|
||||
" \"iterations\": 4,\n",
|
||||
" \"n_cross_validations\": 5,\n",
|
||||
" \"primary_metric\": 'AUC_weighted',\n",
|
||||
" \"preprocess\": True,\n",
|
||||
" \"max_cores_per_iteration\": 1,\n",
|
||||
" \"verbosity\": logging.INFO\n",
|
||||
"}\n",
|
||||
"automl_config = AutoMLConfig(task = 'classification',\n",
|
||||
" debug_log = 'automl_errors.log',\n",
|
||||
" path=project_folder,\n",
|
||||
" run_configuration=conda_run_config,\n",
|
||||
" #compute_target = dsvm_compute,\n",
|
||||
" data_script = project_folder + \"/get_data.py\",\n",
|
||||
" **automl_settings\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets/models even when the experiment is running to retreive the best model up to that point. Once you are satisfied with the model you can cancel a particular iteration or the whole run."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run = experiment.submit(automl_config, show_output=False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Results\n",
|
||||
"#### Widget for monitoring runs\n",
|
||||
"\n",
|
||||
"The widget will sit on \"loading\" until the first iteration completed, then you will see an auto-updating graph and table show up. It refreshed once per minute, so you should see the graph update as child runs complete.\n",
|
||||
"\n",
|
||||
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under /tmp/azureml_run/{iterationid}/azureml-logs\n",
|
||||
"\n",
|
||||
"NOTE: The widget displays a link at the bottom. This links to a web-ui to explore the individual run details."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"RunDetails(remote_run).show() "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Wait until the run finishes.\n",
|
||||
"remote_run.wait_for_completion(show_output = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"#### Retrieve All Child Runs\n",
|
||||
"You can also use sdk methods to fetch all the child runs and see individual metrics that we log. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"children = list(remote_run.get_children())\n",
|
||||
"metricslist = {}\n",
|
||||
"for run in children:\n",
|
||||
" properties = run.get_properties()\n",
|
||||
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
|
||||
" metricslist[int(properties['iteration'])] = metrics\n",
|
||||
"\n",
|
||||
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
|
||||
"rundata"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Canceling Runs\n",
|
||||
"You can cancel ongoing remote runs using the *cancel()* and *cancel_iteration()* functions"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Cancel the ongoing experiment and stop scheduling new iterations\n",
|
||||
"# remote_run.cancel()\n",
|
||||
"\n",
|
||||
"# Cancel iteration 1 and move onto iteration 2\n",
|
||||
"# remote_run.cancel_iteration(1)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Pre-process cache cleanup\n",
|
||||
"The preprocess data gets cache at user default file store. When the run is completed the cache can be cleaned by running below cell"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run.clean_preprocessor_cache()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Retrieve the Best Model\n",
|
||||
"\n",
|
||||
"Below we select the best pipeline from our iterations. The *get_output* method returns the best run and the fitted model. There are overloads on *get_output* that allow you to retrieve the best run and fitted model for *any* logged metric or a particular *iteration*."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"best_run, fitted_model = remote_run.get_output()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Best Model based on any other metric"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# lookup_metric = \"accuracy\"\n",
|
||||
"# best_run, fitted_model = remote_run.get_output(metric=lookup_metric)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Model from a specific iteration"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# iteration = 1\n",
|
||||
"# best_run, fitted_model = remote_run.get_output(iteration=iteration)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Test\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Load test data.\n",
|
||||
"from pandas_ml import ConfusionMatrix\n",
|
||||
"\n",
|
||||
"data_test = fetch_20newsgroups(subset = 'test', categories = categories,\n",
|
||||
" shuffle = True, random_state = 42,\n",
|
||||
" remove = remove)\n",
|
||||
"\n",
|
||||
"X_test = np.array(data_test.data).reshape((len(data_test.data),1))\n",
|
||||
"y_test = data_test.target\n",
|
||||
"\n",
|
||||
"# Test our best pipeline.\n",
|
||||
"\n",
|
||||
"y_pred = fitted_model.predict(X_test)\n",
|
||||
"y_pred_strings = [data_test.target_names[i] for i in y_pred]\n",
|
||||
"y_test_strings = [data_test.target_names[i] for i in y_test]\n",
|
||||
"\n",
|
||||
"cm = ConfusionMatrix(y_test_strings, y_pred_strings)\n",
|
||||
"print(cm)\n",
|
||||
"cm.plot()"
|
||||
]
|
||||
}
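The `pandas_ml` package used above can be difficult to install alongside newer pandas releases; as a hedged alternative, the same labelled confusion matrix can be built with scikit-learn and pandas directly:

```python
# Alternative sketch: confusion matrix via scikit-learn, labelled with pandas.
import pandas as pd
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)
print(pd.DataFrame(cm,
                   index=data_test.target_names,
                   columns=data_test.target_names))
```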
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "savitam"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -1,527 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Automated Machine Learning\n",
|
||||
"_**Remote Execution using DSVM (Ubuntu)**_\n",
|
||||
"\n",
|
||||
"## Contents\n",
|
||||
"1. [Introduction](#Introduction)\n",
|
||||
"1. [Setup](#Setup)\n",
|
||||
"1. [Data](#Data)\n",
|
||||
"1. [Train](#Train)\n",
|
||||
"1. [Results](#Results)\n",
|
||||
"1. [Test](#Test)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Introduction\n",
|
||||
"In this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.\n",
|
||||
"\n",
|
||||
"Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
|
||||
"\n",
|
||||
"In this notebook you wiil learn how to:\n",
|
||||
"1. Create an `Experiment` in an existing `Workspace`.\n",
|
||||
"2. Attach an existing DSVM to a workspace.\n",
|
||||
"3. Configure AutoML using `AutoMLConfig`.\n",
|
||||
"4. Train the model using the DSVM.\n",
|
||||
"5. Explore the results.\n",
|
||||
"6. Test the best fitted model.\n",
|
||||
"\n",
|
||||
"In addition, this notebook showcases the following features:\n",
|
||||
"- **Parallel** executions for iterations\n",
|
||||
"- **Asynchronous** tracking of progress\n",
|
||||
"- **Cancellation** of individual iterations or the entire run\n",
|
||||
"- Retrieving models for any iteration or logged metric\n",
|
||||
"- Specifying AutoML settings as `**kwargs`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import logging\n",
|
||||
"import os\n",
|
||||
"import time\n",
|
||||
"import csv\n",
|
||||
"\n",
|
||||
"from matplotlib import pyplot as plt\n",
|
||||
"import numpy as np\n",
|
||||
"import pandas as pd\n",
|
||||
"from sklearn import datasets\n",
|
||||
"\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core.experiment import Experiment\n",
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"from azureml.train.automl import AutoMLConfig"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"# Choose a name for the run history container in the workspace.\n",
|
||||
"experiment_name = 'automl-remote-dsvm'\n",
|
||||
"project_folder = './project'\n",
|
||||
"\n",
|
||||
"experiment = Experiment(ws, experiment_name)\n",
|
||||
"\n",
|
||||
"output = {}\n",
|
||||
"output['SDK version'] = azureml.core.VERSION\n",
|
||||
"output['Subscription ID'] = ws.subscription_id\n",
|
||||
"output['Workspace Name'] = ws.name\n",
|
||||
"output['Resource Group'] = ws.resource_group\n",
|
||||
"output['Location'] = ws.location\n",
|
||||
"output['Project Directory'] = project_folder\n",
|
||||
"output['Experiment Name'] = experiment.name\n",
|
||||
"pd.set_option('display.max_colwidth', -1)\n",
|
||||
"outputDf = pd.DataFrame(data = output, index = [''])\n",
|
||||
"outputDf.T"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a Remote Linux DSVM\n",
|
||||
"**Note:** If creation fails with a message about Marketplace purchase eligibilty, start creation of a DSVM through the [Azure portal](https://portal.azure.com), and select \"Want to create programmatically\" to enable programmatic creation. Once you've enabled this setting, you can exit the portal without actually creating the DSVM, and creation of the DSVM through the notebook should work.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import DsvmCompute\n",
|
||||
"\n",
|
||||
"dsvm_name = 'mydsvma'\n",
|
||||
"try:\n",
|
||||
" dsvm_compute = DsvmCompute(ws, dsvm_name)\n",
|
||||
" print('Found an existing DSVM.')\n",
|
||||
"except:\n",
|
||||
" print('Creating a new DSVM.')\n",
|
||||
" dsvm_config = DsvmCompute.provisioning_configuration(vm_size = \"Standard_D2_v2\")\n",
|
||||
" dsvm_compute = DsvmCompute.create(ws, name = dsvm_name, provisioning_configuration = dsvm_config)\n",
|
||||
" dsvm_compute.wait_for_completion(show_output = True)\n",
|
||||
" print(\"Waiting one minute for ssh to be accessible\")\n",
|
||||
" time.sleep(90) # Wait for ssh to be accessible"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Data\n",
|
||||
"For remote executions, you need to make the data accessible from the remote compute.\n",
|
||||
"This can be done by uploading the data to DataStore.\n",
|
||||
"In this example, we upload scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) data."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"data_train = datasets.load_digits()\n",
|
||||
"\n",
|
||||
"if not os.path.isdir('data'):\n",
|
||||
" os.mkdir('data')\n",
|
||||
" \n",
|
||||
"if not os.path.exists(project_folder):\n",
|
||||
" os.makedirs(project_folder)\n",
|
||||
" \n",
|
||||
"pd.DataFrame(data_train.data).to_csv(\"data/X_train.tsv\", index=False, header=False, quoting=csv.QUOTE_ALL, sep=\"\\t\")\n",
|
||||
"pd.DataFrame(data_train.target).to_csv(\"data/y_train.tsv\", index=False, header=False, sep=\"\\t\")\n",
|
||||
"\n",
|
||||
"ds = ws.get_default_datastore()\n",
|
||||
"ds.upload(src_dir='./data', target_path='re_data', overwrite=True, show_progress=True)\n",
|
||||
"\n",
|
||||
"from azureml.core.runconfig import DataReferenceConfiguration\n",
|
||||
"dr = DataReferenceConfiguration(datastore_name=ds.name, \n",
|
||||
" path_on_datastore='re_data', \n",
|
||||
" path_on_compute='/tmp/azureml_runs',\n",
|
||||
" mode='download', # download files from datastore to compute target\n",
|
||||
" overwrite=False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"# create a new RunConfig object\n",
|
||||
"conda_run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to the Linux DSVM\n",
|
||||
"conda_run_config.target = dsvm_compute\n",
|
||||
"\n",
|
||||
"# set the data reference of the run coonfiguration\n",
|
||||
"conda_run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80'])\n",
|
||||
"conda_run_config.environment.python.conda_dependencies = cd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile $project_folder/get_data.py\n",
|
||||
"\n",
|
||||
"import pandas as pd\n",
|
||||
"\n",
|
||||
"def get_data():\n",
|
||||
" X_train = pd.read_csv(\"/tmp/azureml_runs/re_data/X_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
" y_train = pd.read_csv(\"/tmp/azureml_runs/re_data/y_train.tsv\", delimiter=\"\\t\", header=None, quotechar='\"')\n",
|
||||
"\n",
|
||||
" return { \"X\" : X_train.values, \"y\" : y_train[0].values }\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Train\n",
|
||||
"\n",
|
||||
"You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.\n",
|
||||
"\n",
|
||||
"**Note:** When using Remote DSVM, you can't pass Numpy arrays directly to the fit method.\n",
|
||||
"\n",
|
||||
"|Property|Description|\n",
|
||||
"|-|-|\n",
|
||||
"|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n",
|
||||
"|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
|
||||
"|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|\n",
|
||||
"|**n_cross_validations**|Number of cross validation splits.|\n",
|
||||
"|**max_concurrent_iterations**|Maximum number of iterations to execute in parallel. This should be less than the number of cores on the DSVM.|"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"automl_settings = {\n",
|
||||
" \"iteration_timeout_minutes\": 10,\n",
|
||||
" \"iterations\": 20,\n",
|
||||
" \"n_cross_validations\": 5,\n",
|
||||
" \"primary_metric\": 'AUC_weighted',\n",
|
||||
" \"preprocess\": False,\n",
|
||||
" \"max_concurrent_iterations\": 2,\n",
|
||||
" \"verbosity\": logging.INFO\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"automl_config = AutoMLConfig(task = 'classification',\n",
|
||||
" debug_log = 'automl_errors.log',\n",
|
||||
" path = project_folder, \n",
|
||||
" run_configuration=conda_run_config,\n",
|
||||
" data_script = project_folder + \"/get_data.py\",\n",
|
||||
" **automl_settings\n",
|
||||
" )\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Note:** The first run on a new DSVM may take several minutes to prepare the environment."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.\n",
|
||||
"\n",
|
||||
"In this example, we specify `show_output = False` to suppress console output while the run is in progress."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run = experiment.submit(automl_config, show_output = False)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"remote_run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Results\n",
|
||||
"\n",
|
||||
"#### Loading Executed Runs\n",
|
||||
"In case you need to load a previously executed run, enable the cell below and replace the `run_id` value."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"remote_run = AutoMLRun(experiment=experiment, run_id = 'AutoML_480d3ed6-fc94-44aa-8f4e-0b945db9d3ef')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Widget for Monitoring Runs\n",
|
||||
"\n",
|
||||
"The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
|
||||
"\n",
|
||||
"You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`\n",
|
||||
"\n",
|
||||
"**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"RunDetails(remote_run).show() "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Wait until the run finishes.\n",
|
||||
"remote_run.wait_for_completion(show_output = True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"#### Retrieve All Child Runs\n",
|
||||
"You can also use SDK methods to fetch all the child runs and see individual metrics that we log."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"children = list(remote_run.get_children())\n",
|
||||
"metricslist = {}\n",
|
||||
"for run in children:\n",
|
||||
" properties = run.get_properties()\n",
|
||||
" metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} \n",
|
||||
" metricslist[int(properties['iteration'])] = metrics\n",
|
||||
"\n",
|
||||
"rundata = pd.DataFrame(metricslist).sort_index(1)\n",
|
||||
"rundata"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Cancelling Runs\n",
|
||||
"\n",
|
||||
"You can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Cancel the ongoing experiment and stop scheduling new iterations.\n",
|
||||
"# remote_run.cancel()\n",
|
||||
"\n",
|
||||
"# Cancel iteration 1 and move onto iteration 2.\n",
|
||||
"# remote_run.cancel_iteration(1)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Retrieve the Best Model\n",
|
||||
"\n",
|
||||
"Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"best_run, fitted_model = remote_run.get_output()\n",
|
||||
"print(best_run)\n",
|
||||
"print(fitted_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Best Model Based on Any Other Metric\n",
|
||||
"Show the run and the model which has the smallest `log_loss` value:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"lookup_metric = \"log_loss\"\n",
|
||||
"best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n",
|
||||
"print(best_run)\n",
|
||||
"print(fitted_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Model from a Specific Iteration\n",
|
||||
"Show the run and the model from the third iteration:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"iteration = 3\n",
|
||||
"third_run, third_model = remote_run.get_output(iteration = iteration)\n",
|
||||
"print(third_run)\n",
|
||||
"print(third_model)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Test\n",
|
||||
"\n",
|
||||
"#### Load Test Data"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"digits = datasets.load_digits()\n",
|
||||
"X_test = digits.data[:10, :]\n",
|
||||
"y_test = digits.target[:10]\n",
|
||||
"images = digits.images[:10]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Test Our Best Fitted Model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Randomly select digits and test.\n",
|
||||
"for index in np.random.choice(len(y_test), 2, replace = False):\n",
|
||||
" print(index)\n",
|
||||
" predicted = fitted_model.predict(X_test[index:index + 1])[0]\n",
|
||||
" label = y_test[index]\n",
|
||||
" title = \"Label value = %d Predicted value = %d \" % (label, predicted)\n",
|
||||
" fig = plt.figure(1, figsize=(3,3))\n",
|
||||
" ax1 = fig.add_axes((0,0,.8,.8))\n",
|
||||
" ax1.set_title(title)\n",
|
||||
" plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')\n",
|
||||
" plt.show()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "savitam"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -9,6 +9,13 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -9,6 +9,13 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -9,6 +9,13 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -26,4 +26,8 @@ You can use Azure Databricks as a compute target from [Azure Machine Learning Pi
|
||||
|
||||
For more on SDK concepts, please refer to [notebooks](https://github.com/Azure/MachineLearningNotebooks).
|
||||
|
||||
**Please let us know your feedback.**
|
||||
**Please let us know your feedback.**
|
||||
|
||||
|
||||
|
||||

|
||||
@@ -11,6 +11,13 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -333,6 +340,13 @@
|
||||
"source": [
|
||||
"dbutils.notebook.exit(\"success\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -11,6 +11,13 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -277,6 +284,13 @@
|
||||
"#comment to not delete the web service\n",
|
||||
"myservice.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -11,6 +11,13 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -203,6 +210,13 @@
|
||||
"#model.delete()\n",
|
||||
"aks_target.delete() "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -11,6 +11,13 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -139,6 +146,13 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -11,6 +11,13 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -143,6 +150,13 @@
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -660,6 +660,13 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -796,6 +796,13 @@
|
||||
"source": [
|
||||
"myservice.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -677,6 +677,13 @@
|
||||
"# Next: ADLA as a Compute Target\n",
|
||||
"To use ADLA as a compute target from Azure Machine Learning Pipeline, a AdlaStep is used. This [notebook](./aml-pipelines-use-adla-as-compute-target.ipynb) demonstrates the use of AdlaStep in Azure Machine Learning Pipeline."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
1
how-to-use-azureml/data-drift/readme.md
Normal file
1
how-to-use-azureml/data-drift/readme.md
Normal file
@@ -0,0 +1 @@
|
||||
Under construction... please visit again soon!
|
||||
@@ -9,6 +9,13 @@
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
@@ -255,7 +255,7 @@
|
||||
"source": [
|
||||
"from azureml.core.conda_dependencies import CondaDependencies \n",
|
||||
"\n",
|
||||
"myenv = CondaDependencies.create(pip_packages=[\"numpy\",\"onnxruntime\",\"azureml-core\"])\n",
|
||||
"myenv = CondaDependencies.create(pip_packages=[\"numpy\",\"onnxruntime==0.4.0\",\"azureml-core\"])\n",
|
||||
"\n",
|
||||
"with open(\"myenv.yml\",\"w\") as f:\n",
|
||||
" f.write(myenv.serialize_to_string())"
|
||||
|
||||
@@ -43,5 +43,6 @@ Take a look at [intro-to-pipelines](./intro-to-pipelines/) for the list of noteb
|
||||
|
||||
1. [pipeline-batch-scoring.ipynb](https://aka.ms/pl-batch-score): This notebook demonstrates how to run a batch scoring job using Azure Machine Learning pipelines.
|
||||
2. [pipeline-style-transfer.ipynb](https://aka.ms/pl-style-trans): This notebook demonstrates a multi-step pipeline that uses GPU compute.
|
||||
3. [nyc-taxi-data-regression-model-building.ipynb](https://aka.ms/pl-nyctaxi-tutorial): This notebook is an AzureML Pipelines version of the previously published two part sample.
|
||||
|
||||

|
||||
|
||||
@@ -65,8 +65,6 @@
|
||||
"import os\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core import Workspace, Experiment, Datastore\n",
|
||||
"from azureml.core.compute import AmlCompute\n",
|
||||
"from azureml.core.compute import ComputeTarget\n",
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
@@ -116,36 +114,20 @@
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')\n",
|
||||
"\n",
|
||||
"# Default datastore (Azure file storage)\n",
|
||||
"def_file_store = ws.get_default_datastore() \n",
|
||||
"# The above call is equivalent to Datastore(ws, \"workspacefilestore\") or simply Datastore(ws)\n",
|
||||
"print(\"Default datastore's name: {}\".format(def_file_store.name))\n",
|
||||
"\n",
|
||||
"# Blob storage associated with the workspace\n",
|
||||
"# Default datastore\n",
|
||||
"def_blob_store = ws.get_default_datastore() \n",
|
||||
"# The following call GETS the Azure Blob Store associated with your workspace.\n",
|
||||
"# Note that workspaceblobstore is **the name of this store and CANNOT BE CHANGED and must be used as is** \n",
|
||||
"def_blob_store = Datastore(ws, \"workspaceblobstore\")\n",
|
||||
"print(\"Blobstore's name: {}\".format(def_blob_store.name))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# project folder\n",
|
||||
"project_folder = '.'\n",
|
||||
" \n",
|
||||
"print('Sample projects will be created in {}.'.format(os.path.realpath(project_folder)))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Required data and script files for the the tutorial\n",
|
||||
"Sample files required to finish this tutorial are already copied to the project folder specified above. Even though the .py provided in the samples don't have much \"ML work,\" as a data scientist, you will work on this extensively as part of your work. To complete this tutorial, the contents of these files are not very important. The one-line files are for demostration purpose only."
|
||||
"Sample files required to finish this tutorial are already copied to the corresponding source_directory locations. Even though the .py provided in the samples don't have much \"ML work,\" as a data scientist, you will work on this extensively as part of your work. To complete this tutorial, the contents of these files are not very important. The one-line files are for demostration purpose only."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -176,19 +158,10 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# get_default_datastore() gets the default Azure File Store associated with your workspace.\n",
|
||||
"# Here we are reusing the def_file_store object we obtained earlier\n",
|
||||
"\n",
|
||||
"# target_path is the directory at the destination\n",
|
||||
"def_file_store.upload_files(['./20news.pkl'], \n",
|
||||
" target_path = '20newsgroups', \n",
|
||||
" overwrite = True, \n",
|
||||
" show_progress = True)\n",
|
||||
"\n",
|
||||
"# Here we are reusing the def_blob_store we created earlier\n",
|
||||
"# get_default_datastore() gets the default Azure Blob Store associated with your workspace.\n",
|
||||
"# Here we are reusing the def_blob_store object we obtained earlier\n",
|
||||
"def_blob_store.upload_files([\"./20news.pkl\"], target_path=\"20newsgroups\", overwrite=True)\n",
|
||||
"\n",
|
||||
"print(\"Upload calls completed\")"
|
||||
"print(\"Upload call completed\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -290,9 +263,10 @@
|
||||
"- [**MpiStep**](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.mpi_step.mpistep?view=azure-ml-py): Adds a step to run a MPI job in a Pipeline.\n",
|
||||
"- [**AutoMLStep**](https://docs.microsoft.com/en-us/python/api/azureml-train-automl/azureml.train.automl.automlstep?view=azure-ml-py): Creates a AutoML step in a Pipeline.\n",
|
||||
"\n",
|
||||
"The following code will create a PythonScriptStep to be executed in the Azure Machine Learning Compute we created above using train.py, one of the files already made available in the project folder.\n",
|
||||
"The following code will create a PythonScriptStep to be executed in the Azure Machine Learning Compute we created above using train.py, one of the files already made available in the `source_directory`.\n",
|
||||
"\n",
|
||||
"A **PythonScriptStep** is a basic, built-in step to run a Python Script on a compute target. It takes a script name and optionally other parameters like arguments for the script, compute target, inputs and outputs. If no compute target is specified, default compute target for the workspace is used. You can also use a [**RunConfiguration**](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.runconfiguration?view=azure-ml-py) to specify requirements for the PythonScriptStep, such as conda dependencies and docker image."
|
||||
"A **PythonScriptStep** is a basic, built-in step to run a Python Script on a compute target. It takes a script name and optionally other parameters like arguments for the script, compute target, inputs and outputs. If no compute target is specified, default compute target for the workspace is used. You can also use a [**RunConfiguration**](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.runconfiguration?view=azure-ml-py) to specify requirements for the PythonScriptStep, such as conda dependencies and docker image.\n",
|
||||
"> The best practice is to use separate folders for scripts and its dependent files for each step and specify that folder as the `source_directory` for the step. This helps reduce the size of the snapshot created for the step (only the specific folder is snapshotted). Since changes in any files in the `source_directory` would trigger a re-upload of the snapshot, this helps keep the reuse of the step when there are no changes in the `source_directory` of the step."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -303,6 +277,9 @@
|
||||
"source": [
|
||||
"# Uses default values for PythonScriptStep construct.\n",
|
||||
"\n",
|
||||
"source_directory = './train'\n",
|
||||
"print('Source directory for the step is {}.'.format(os.path.realpath(source_directory)))\n",
|
||||
"\n",
|
||||
"# Syntax\n",
|
||||
"# PythonScriptStep(\n",
|
||||
"# script_name, \n",
|
||||
@@ -321,7 +298,7 @@
|
||||
"step1 = PythonScriptStep(name=\"train_step\",\n",
|
||||
" script_name=\"train.py\", \n",
|
||||
" compute_target=aml_compute, \n",
|
||||
" source_directory=project_folder,\n",
|
||||
" source_directory=source_directory,\n",
|
||||
" allow_reuse=True)\n",
|
||||
"print(\"Step1 created\")"
|
||||
]
|
||||
@@ -351,12 +328,15 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# All steps use files already available in the project_folder\n",
|
||||
"# For this step, we use a different source_directory\n",
|
||||
"source_directory = './compare'\n",
|
||||
"print('Source directory for the step is {}.'.format(os.path.realpath(source_directory)))\n",
|
||||
"\n",
|
||||
"# All steps use the same Azure Machine Learning compute target as well\n",
|
||||
"step2 = PythonScriptStep(name=\"compare_step\",\n",
|
||||
" script_name=\"compare.py\", \n",
|
||||
" compute_target=aml_compute, \n",
|
||||
" source_directory=project_folder)\n",
|
||||
" source_directory=source_directory)\n",
|
||||
"\n",
|
||||
"# Use a RunConfiguration to specify some additional requirements for this step.\n",
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
@@ -378,10 +358,14 @@
|
||||
"# specify CondaDependencies obj\n",
|
||||
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
|
||||
"\n",
|
||||
"# For this step, we use yet another source_directory\n",
|
||||
"source_directory = './extract'\n",
|
||||
"print('Source directory for the step is {}.'.format(os.path.realpath(source_directory)))\n",
|
||||
"\n",
|
||||
"step3 = PythonScriptStep(name=\"extract_step\",\n",
|
||||
" script_name=\"extract.py\", \n",
|
||||
" compute_target=aml_compute, \n",
|
||||
" source_directory=project_folder,\n",
|
||||
" source_directory=source_directory,\n",
|
||||
" runconfig=run_config)\n",
|
||||
"\n",
|
||||
"# list of steps to run\n",
|
||||
@@ -597,7 +581,7 @@
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "diray"
|
||||
"name": "sanpil"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
|
||||
@@ -22,9 +22,17 @@
|
||||
"# Azure Machine Learning Pipeline with HyperDriveStep\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"This notebook is used to demonstrate the use of HyperDriveStep in AML Pipeline.\n",
|
||||
"This notebook is used to demonstrate the use of HyperDriveStep in AML Pipeline."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites and Azure Machine Learning Basics\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. \n",
|
||||
"\n",
|
||||
"## Azure Machine Learning and Pipeline SDK-specific imports\n"
|
||||
"## Azure Machine Learning and Pipeline SDK-specific imports"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -33,19 +41,24 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import shutil\n",
|
||||
"import urllib\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core import Workspace, Experiment\n",
|
||||
"from azureml.core.datastore import Datastore\n",
|
||||
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
|
||||
"from azureml.exceptions import ComputeTargetException\n",
|
||||
"from azureml.data.data_reference import DataReference\n",
|
||||
"from azureml.pipeline.steps import HyperDriveStep\n",
|
||||
"from azureml.pipeline.steps import HyperDriveStep, HyperDriveStepRun\n",
|
||||
"from azureml.pipeline.core import Pipeline, PipelineData\n",
|
||||
"from azureml.train.dnn import TensorFlow\n",
|
||||
"from azureml.train.hyperdrive import *\n",
|
||||
"# from azureml.train.hyperdrive import *\n",
|
||||
"from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal\n",
|
||||
"from azureml.train.hyperdrive import choice, loguniform\n",
|
||||
"\n",
|
||||
"import os\n",
|
||||
"import shutil\n",
|
||||
"import urllib\n",
|
||||
"import numpy as np\n",
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
@@ -87,7 +100,7 @@
|
||||
"script_folder = './tf-mnist'\n",
|
||||
"os.makedirs(script_folder, exist_ok=True)\n",
|
||||
"\n",
|
||||
"exp = Experiment(workspace=ws, name='tf-mnist')"
|
||||
"exp = Experiment(workspace=ws, name='Hyperdrive_sample')"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -112,6 +125,42 @@
|
||||
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename = './data/mnist/test-labels.gz')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Show some sample images\n",
|
||||
"Let's load the downloaded compressed file into numpy arrays using some utility functions included in the `utils.py` library file from the current folder. Then we use `matplotlib` to plot 30 random images from the dataset along with their labels."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from utils import load_data\n",
|
||||
"\n",
|
||||
"# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster.\n",
|
||||
"X_train = load_data('./data/mnist/train-images.gz', False) / 255.0\n",
|
||||
"y_train = load_data('./data/mnist/train-labels.gz', True).reshape(-1)\n",
|
||||
"\n",
|
||||
"X_test = load_data('./data/mnist/test-images.gz', False) / 255.0\n",
|
||||
"y_test = load_data('./data/mnist/test-labels.gz', True).reshape(-1)\n",
|
||||
"\n",
|
||||
"count = 0\n",
|
||||
"sample_size = 30\n",
|
||||
"plt.figure(figsize = (16, 6))\n",
|
||||
"for i in np.random.permutation(X_train.shape[0])[:sample_size]:\n",
|
||||
" count = count + 1\n",
|
||||
" plt.subplot(1, sample_size, count)\n",
|
||||
" plt.axhline('')\n",
|
||||
" plt.axvline('')\n",
|
||||
" plt.text(x = 10, y = -10, s = y_train[i], fontsize = 18)\n",
|
||||
" plt.imshow(X_train[i].reshape(28, 28), cmap = plt.cm.Greys)\n",
|
||||
"plt.show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -135,7 +184,8 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Retrieve or create a Azure Machine Learning compute\n",
|
||||
"Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's get the default Azure Machine Learning Compute in the current workspace. We will then run the training script on this compute target."
|
||||
"Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads.\n",
|
||||
"Let's check the available computes first."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -144,7 +194,27 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"compute_target = ws.get_default_compute_target(\"GPU\")"
|
||||
"cts = ws.compute_targets\n",
|
||||
"for name, ct in cts.items():\n",
|
||||
" print(name, ct.type, ct.provisioning_state)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now let's get the default Azure Machine Learning Compute in the current workspace. We will then run the training script on this compute target."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"compute_target = ws.get_default_compute_target(\"GPU\")\n",
|
||||
"\n",
|
||||
"print(compute_target.get_status().serialize())"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -173,8 +243,12 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create TensorFlow estimator\n",
|
||||
"Next, we construct an `azureml.train.dnn.TensorFlow` estimator object, use the Batch AI cluster as compute target, and pass the mount-point of the datastore to the training code as a parameter.\n",
|
||||
"The TensorFlow estimator is providing a simple way of launching a TensorFlow training job on a compute target. It will automatically provide a docker image that has TensorFlow installed -- if additional pip or conda packages are required, their names can be passed in via the `pip_packages` and `conda_packages` arguments and they will be included in the resulting docker."
|
||||
"Next, we construct an [TensorFlow](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) estimator object.\n",
|
||||
"The TensorFlow estimator is providing a simple way of launching a TensorFlow training job on a compute target. It will automatically provide a docker image that has TensorFlow installed -- if additional pip or conda packages are required, their names can be passed in via the `pip_packages` and `conda_packages` arguments and they will be included in the resulting docker.\n",
|
||||
"\n",
|
||||
"The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release.\n",
|
||||
"\n",
|
||||
"The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -186,7 +260,8 @@
|
||||
"est = TensorFlow(source_directory=script_folder, \n",
|
||||
" compute_target=compute_target,\n",
|
||||
" entry_script='tf_mnist.py', \n",
|
||||
" use_gpu=True)"
|
||||
" use_gpu=True,\n",
|
||||
" framework_version='1.13')"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -194,7 +269,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Intelligent hyperparameter tuning\n",
|
||||
"We have trained the model with one set of hyperparameters, now let's how we can do hyperparameter tuning by launching multiple runs on the cluster. First let's define the parameter space using random sampling.\n",
|
||||
"Now let's try hyperparameter tuning by launching multiple runs on the cluster. First let's define the parameter space using random sampling.\n",
|
||||
"\n",
|
||||
"In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, the best validation accuracy (`validation_acc`)."
|
||||
]
|
||||
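For reference, a hedged sketch of the sampling space and early-termination policy consumed by the `HyperDriveConfig` below; the script argument names are assumptions about what `tf_mnist.py` accepts, not taken from this notebook:

```python
# Hedged sketch: random sampling over hypothetical tf_mnist.py arguments,
# plus a Bandit early-termination policy (imports come from the cell above).
ps = RandomParameterSampling({
    '--batch-size': choice(32, 64, 128),            # discrete choices
    '--first-layer-neurons': choice(50, 200, 500),  # hypothetical argument
    '--learning-rate': loguniform(-6, -1)           # drawn from exp(uniform(-6, -1))
})
early_termination_policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)
```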
@@ -251,8 +326,8 @@
|
||||
" policy=early_termination_policy,\n",
|
||||
" primary_metric_name='validation_acc', \n",
|
||||
" primary_metric_goal=PrimaryMetricGoal.MAXIMIZE, \n",
|
||||
" max_total_runs=1,\n",
|
||||
" max_concurrent_runs=1)"
|
||||
" max_total_runs=10,\n",
|
||||
" max_concurrent_runs=4)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -261,6 +336,7 @@
|
||||
"source": [
|
||||
"## Add HyperDrive as a step of pipeline\n",
|
||||
"\n",
|
||||
"### Setup an input for the hypderdrive step\n",
|
||||
"Let's setup a data reference for inputs of hyperdrive step."
|
||||
]
|
||||
},
|
||||
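A minimal sketch of such a data reference, assuming `ds` is the datastore obtained earlier and that the MNIST files live under a `mnist` path on it (the path and reference name are assumptions):

```python
# Hedged sketch: the DataReference consumed below as `data_folder`
# (path_on_datastore and data_reference_name are assumptions).
data_folder = DataReference(
    datastore=ds,
    data_reference_name='mnist_data',
    path_on_datastore='mnist')
```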
@@ -302,8 +378,9 @@
|
||||
" datastore=ds,\n",
|
||||
" pipeline_output_name=metrics_output_name)\n",
|
||||
"\n",
|
||||
"hd_step_name='hd_step01'\n",
|
||||
"hd_step = HyperDriveStep(\n",
|
||||
" name=\"hyperdrive_module\",\n",
|
||||
" name=hd_step_name,\n",
|
||||
" hyperdrive_config=hd_config,\n",
|
||||
" estimator_entry_script_arguments=['--data-folder', data_folder],\n",
|
||||
" inputs=[data_folder],\n",
|
||||
@@ -324,7 +401,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline = Pipeline(workspace=ws, steps=[hd_step])\n",
|
||||
"pipeline_run = Experiment(ws, 'Hyperdrive_Test').submit(pipeline)"
|
||||
"pipeline_run = exp.submit(pipeline)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -357,7 +434,8 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline_run.wait_for_completion()"
|
||||
"# PUBLISHONLY\n",
|
||||
"# pipeline_run.wait_for_completion()"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -374,8 +452,9 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"metrics_output = pipeline_run.get_pipeline_output(metrics_output_name)\n",
|
||||
"num_file_downloaded = metrics_output.download('.', show_progress=True)"
|
||||
"# PUBLISHONLY\n",
|
||||
"# metrics_output = pipeline_run.get_pipeline_output(metrics_output_name)\n",
|
||||
"# num_file_downloaded = metrics_output.download('.', show_progress=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -384,14 +463,374 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import pandas as pd\n",
|
||||
"import json\n",
|
||||
"with open(metrics_output._path_on_datastore) as f: \n",
|
||||
" metrics_output_result = f.read()\n",
|
||||
"# PUBLISHONLY\n",
|
||||
"# import pandas as pd\n",
|
||||
"# import json\n",
|
||||
"# with open(metrics_output._path_on_datastore) as f: \n",
|
||||
"# metrics_output_result = f.read()\n",
|
||||
" \n",
|
||||
"deserialized_metrics_output = json.loads(metrics_output_result)\n",
|
||||
"df = pd.DataFrame(deserialized_metrics_output)\n",
|
||||
"df"
|
||||
"# deserialized_metrics_output = json.loads(metrics_output_result)\n",
|
||||
"# df = pd.DataFrame(deserialized_metrics_output)\n",
|
||||
"# df"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Find and register best model\n",
|
||||
"When all the jobs finish, we can find out the one that has the highest accuracy."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# hd_step_run = HyperDriveStepRun(step_run=pipeline_run.find_step_run(hd_step_name)[0])\n",
|
||||
"# best_run = hd_step_run.get_best_run_by_primary_metric()\n",
|
||||
"# best_run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now let's list the model files uploaded during the run."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# print(best_run.get_file_names())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can then register the folder (and all files in it) as a model named `tf-dnn-mnist` under the workspace for deployment."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# model = best_run.register_model(model_name='tf-dnn-mnist', model_path='outputs/model')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deploy the model in ACI\n",
|
||||
"Now we are ready to deploy the model as a web service running in Azure Container Instance [ACI](https://azure.microsoft.com/en-us/services/container-instances/). Azure Machine Learning accomplishes this by constructing a Docker image with the scoring logic and model baked in.\n",
|
||||
"### Create score.py\n",
|
||||
"First, we will create a scoring script that will be invoked by the web service call. \n",
|
||||
"\n",
|
||||
"* Note that the scoring script must have two required functions, `init()` and `run(input_data)`. \n",
|
||||
" * In `init()` function, you typically load the model into a global object. This function is executed only once when the Docker container is started. \n",
|
||||
" * In `run(input_data)` function, the model is used to predict a value based on the input data. The input and output to `run` typically use JSON as serialization and de-serialization format but you are not limited to that."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%%writefile score.py\n",
|
||||
"import json\n",
|
||||
"import numpy as np\n",
|
||||
"import os\n",
|
||||
"import tensorflow as tf\n",
|
||||
"\n",
|
||||
"from azureml.core.model import Model\n",
|
||||
"\n",
|
||||
"def init():\n",
|
||||
" global X, output, sess\n",
|
||||
" tf.reset_default_graph()\n",
|
||||
" model_root = Model.get_model_path('tf-dnn-mnist')\n",
|
||||
" saver = tf.train.import_meta_graph(os.path.join(model_root, 'mnist-tf.model.meta'))\n",
|
||||
" X = tf.get_default_graph().get_tensor_by_name(\"network/X:0\")\n",
|
||||
" output = tf.get_default_graph().get_tensor_by_name(\"network/output/MatMul:0\")\n",
|
||||
" \n",
|
||||
" sess = tf.Session()\n",
|
||||
" saver.restore(sess, os.path.join(model_root, 'mnist-tf.model'))\n",
|
||||
"\n",
|
||||
"def run(raw_data):\n",
|
||||
" data = np.array(json.loads(raw_data)['data'])\n",
|
||||
" # make prediction\n",
|
||||
" out = output.eval(session=sess, feed_dict={X: data})\n",
|
||||
" y_hat = np.argmax(out, axis=1)\n",
|
||||
" return y_hat.tolist()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create myenv.yml\n",
|
||||
"We also need to create an environment file so that Azure Machine Learning can install the necessary packages in the Docker image which are required by your scoring script. In this case, we need to specify packages `numpy`, `tensorflow`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# from azureml.core.runconfig import CondaDependencies\n",
|
||||
"\n",
|
||||
"# cd = CondaDependencies.create()\n",
|
||||
"# cd.add_conda_package('numpy')\n",
|
||||
"# cd.add_tensorflow_conda_package()\n",
|
||||
"# cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')\n",
|
||||
"\n",
|
||||
"# print(cd.serialize_to_string())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deploy to ACI\n",
|
||||
"We are almost ready to deploy. Create a deployment configuration and specify the number of CPUs and gigbyte of RAM needed for your ACI container. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# from azureml.core.webservice import AciWebservice\n",
|
||||
"\n",
|
||||
"# aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, \n",
|
||||
"# memory_gb=1, \n",
|
||||
"# tags={'name':'mnist', 'framework': 'TensorFlow DNN'},\n",
|
||||
"# description='Tensorflow DNN on MNIST')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Deployment Process\n",
|
||||
"Now we can deploy. **This cell will run for about 7-8 minutes**. Behind the scene, it will do the following:\n",
|
||||
"1. **Register model** \n",
|
||||
"Take the local `model` folder (which contains our previously downloaded trained model files) and register it (and the files inside that folder) as a model named `model` under the workspace. Azure ML will register the model directory or model file(s) we specify to the `model_paths` parameter of the `Webservice.deploy` call.\n",
|
||||
"2. **Build Docker image** \n",
|
||||
"Build a Docker image using the scoring file (`score.py`), the environment file (`myenv.yml`), and the `model` folder containing the TensorFlow model files. \n",
|
||||
"3. **Register image** \n",
|
||||
"Register that image under the workspace. \n",
|
||||
"4. **Ship to ACI** \n",
|
||||
"And finally ship the image to the ACI infrastructure, start up a container in ACI using that image, and expose an HTTP endpoint to accept REST client calls."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# from azureml.core.image import ContainerImage\n",
|
||||
"\n",
|
||||
"# imgconfig = ContainerImage.image_configuration(execution_script=\"score.py\", \n",
|
||||
"# runtime=\"python\", \n",
|
||||
"# conda_file=\"myenv.yml\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# %%time\n",
|
||||
"# from azureml.core.webservice import Webservice\n",
|
||||
"\n",
|
||||
"# service = Webservice.deploy_from_model(workspace=ws,\n",
|
||||
"# name='tf-mnist-svc',\n",
|
||||
"# deployment_config=aciconfig,\n",
|
||||
"# models=[model],\n",
|
||||
"# image_config=imgconfig)\n",
|
||||
"\n",
|
||||
"# service.wait_for_deployment(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:**"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# print(service.get_logs())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"This is the scoring web service endpoint:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# print(service.scoring_uri)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Test the deployed model\n",
|
||||
"Let's test the deployed model. Pick 30 random samples from the test set, and send it to the web service hosted in ACI. Note here we are using the `run` API in the SDK to invoke the service. You can also make raw HTTP calls using any HTTP tool such as curl.\n",
|
||||
"\n",
|
||||
"After the invocation, we print the returned predictions and plot them along with the input images. Use red font color and inversed image (white on black) to highlight the misclassified samples. Note since the model accuracy is pretty high, you might have to run the below cell a few times before you can see a misclassified sample."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# import json\n",
|
||||
"\n",
|
||||
"# # find 30 random samples from test set\n",
|
||||
"# n = 30\n",
|
||||
"# sample_indices = np.random.permutation(X_test.shape[0])[0:n]\n",
|
||||
"\n",
|
||||
"# test_samples = json.dumps({\"data\": X_test[sample_indices].tolist()})\n",
|
||||
"# test_samples = bytes(test_samples, encoding='utf8')\n",
|
||||
"\n",
|
||||
"# # predict using the deployed model\n",
|
||||
"# result = service.run(input_data=test_samples)\n",
|
||||
"\n",
|
||||
"# # compare actual value vs. the predicted values:\n",
|
||||
"# i = 0\n",
|
||||
"# plt.figure(figsize = (20, 1))\n",
|
||||
"\n",
|
||||
"# for s in sample_indices:\n",
|
||||
"# plt.subplot(1, n, i + 1)\n",
|
||||
"# plt.axhline('')\n",
|
||||
"# plt.axvline('')\n",
|
||||
" \n",
|
||||
"# # use different color for misclassified sample\n",
|
||||
"# font_color = 'red' if y_test[s] != result[i] else 'black'\n",
|
||||
"# clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys\n",
|
||||
" \n",
|
||||
"# plt.text(x=10, y=-10, s=y_hat[s], fontsize=18, color=font_color)\n",
|
||||
"# plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)\n",
|
||||
" \n",
|
||||
"# i = i + 1\n",
|
||||
"# plt.show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can also send raw HTTP request to the service."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# import requests\n",
|
||||
"\n",
|
||||
"# # send a random row from the test set to score\n",
|
||||
"# random_index = np.random.randint(0, len(X_test)-1)\n",
|
||||
"# input_data = \"{\\\"data\\\": [\" + str(list(X_test[random_index])) + \"]}\"\n",
|
||||
"\n",
|
||||
"# headers = {'Content-Type':'application/json'}\n",
|
||||
"\n",
|
||||
"# resp = requests.post(service.scoring_uri, input_data, headers=headers)\n",
|
||||
"\n",
|
||||
"# print(\"POST to url\", service.scoring_uri)\n",
|
||||
"# print(\"input data:\", input_data)\n",
|
||||
"# print(\"label:\", y_test[random_index])\n",
|
||||
"# print(\"prediction:\", resp.text)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Let's look at the workspace after the web service was deployed. You should see \n",
|
||||
"* a registered model named 'model' and with the id 'model:1'\n",
|
||||
"* an image called 'tf-mnist' and with a docker image location pointing to your workspace's Azure Container Registry (ACR) \n",
|
||||
"* a webservice called 'tf-mnist' with some scoring URL"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# models = ws.models\n",
|
||||
"# for name, model in models.items():\n",
|
||||
"# print(\"Model: {}, ID: {}\".format(name, model.id))\n",
|
||||
" \n",
|
||||
"# images = ws.images\n",
|
||||
"# for name, image in images.items():\n",
|
||||
"# print(\"Image: {}, location: {}\".format(name, image.image_location))\n",
|
||||
" \n",
|
||||
"# webservices = ws.webservices\n",
|
||||
"# for name, webservice in webservices.items():\n",
|
||||
"# print(\"Webservice: {}, scoring URI: {}\".format(name, webservice.scoring_uri))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Clean up\n",
|
||||
"You can delete the ACI deployment with a simple delete API call."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# PUBLISHONLY\n",
|
||||
"# service.delete()"
|
||||
]
|
||||
}
|
||||
],
|
||||
|
||||
@@ -63,7 +63,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"aml_compute = ws.get_default_compute_target(\"CPU\")"
|
||||
"aml_compute_target = ws.get_default_compute_target(\"CPU\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -284,7 +284,10 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Disable the schedule"
|
||||
"### Disable the schedule\n",
|
||||
"It is important to note the best practice of disabling schedules when not in use.\n",
|
||||
"The number of schedule triggers allowed per month per region per subscription is 100,000.\n",
|
||||
"This is calculated using the project trigger counts for all active schedules."
|
||||
]
|
||||
},
|
||||
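A minimal sketch of disabling a schedule, assuming `schedule` is the `Schedule` object created earlier in this notebook (the variable name is an assumption):

```python
# Hedged sketch: disable the schedule and confirm its status.
schedule.disable(wait_for_provisioning=True)
print('Schedule status:', schedule.status)  # expected: 'Disabled'
```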
{
|
||||
|
||||
@@ -127,7 +127,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### Retrieve or create a Aml compute\n",
|
||||
"#### Retrieve or create an Aml compute\n",
|
||||
"Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's get the default Aml Compute in the current workspace. We will then run the training script on this compute target."
|
||||
]
|
||||
},
|
||||
@@ -418,6 +418,59 @@
|
||||
"RunDetails(pipeline_run1).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### See Outputs\n",
|
||||
"\n",
|
||||
"See where outputs of each pipeline step are located on your datastore."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Get Steps\n",
|
||||
"for step in pipeline_run1.get_steps():\n",
|
||||
" print(\"Outputs of step \" + step.name)\n",
|
||||
" \n",
|
||||
" # Get a dictionary of StepRunOutputs with the output name as the key \n",
|
||||
" output_dict = step.get_outputs()\n",
|
||||
" \n",
|
||||
" for name, output in output_dict.items():\n",
|
||||
" \n",
|
||||
" output_reference = output.get_port_data_reference() # Get output port data reference\n",
|
||||
" print(\"\\tname: \" + name)\n",
|
||||
" print(\"\\tdatastore: \" + output_reference.datastore_name)\n",
|
||||
" print(\"\\tpath on datastore: \" + output_reference.path_on_datastore)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Download Outputs\n",
|
||||
"\n",
|
||||
"We can download the output of any step to our local machine using the SDK."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Retrieve the step runs by name 'train.py'\n",
|
||||
"train_step = pipeline_run1.find_step_run('train.py')\n",
|
||||
"\n",
|
||||
"if train_step:\n",
|
||||
" train_step_obj = train_step[0] # since we have only one step by name 'train.py'\n",
|
||||
" train_step_obj.get_output_data('processed_data1').download(\"./outputs\") # download the output to current directory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
|
||||
File diff suppressed because it is too large
Load Diff
@@ -293,7 +293,7 @@
|
||||
"source": [
|
||||
"from azureml.core.runconfig import DEFAULT_GPU_IMAGE\n",
|
||||
"\n",
|
||||
"cd = CondaDependencies.create(pip_packages=[\"tensorflow-gpu==1.10.0\", \"azureml-defaults\"])\n",
|
||||
"cd = CondaDependencies.create(pip_packages=[\"tensorflow-gpu==1.13.1\", \"azureml-defaults\"])\n",
|
||||
"\n",
|
||||
"# Runconfig\n",
|
||||
"amlcompute_run_config = RunConfiguration(conda_dependencies=cd)\n",
|
||||
|
||||
@@ -223,7 +223,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, Chainer and its dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `Chainer` constructor's `pip_packages` or `conda_packages` parameters."
|
||||
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI, you must provide the argument `distributed_training=MpiConfiguration()`. Using this estimator with these settings, Chainer and its dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `Chainer` constructor's `pip_packages` or `conda_packages` parameters."
|
||||
]
|
||||
},
|
||||
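For reference, a minimal sketch of such a distributed Chainer estimator; the entry script and compute target names are assumptions, since the cell referenced above is not shown here:

```python
# Hedged sketch: a 2-node distributed Chainer run over MPI
# (entry_script and compute_target names are assumptions).
from azureml.core.runconfig import MpiConfiguration
from azureml.train.dnn import Chainer

estimator = Chainer(source_directory='.',
                    compute_target=compute_target,
                    entry_script='train_chainer.py',
                    node_count=2,
                    process_count_per_node=1,
                    distributed_training=MpiConfiguration(),
                    use_gpu=True)
```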
{
|
||||
|
||||
@@ -270,6 +270,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import MpiConfiguration\n",
|
||||
"from azureml.train.estimator import Estimator\n",
|
||||
"\n",
|
||||
"script_params = {\n",
|
||||
@@ -283,8 +284,7 @@
|
||||
" entry_script='cntk_distr_mnist.py',\n",
|
||||
" script_params=script_params,\n",
|
||||
" node_count=2,\n",
|
||||
" process_count_per_node=1,\n",
|
||||
" distributed_backend='mpi',\n",
|
||||
" distributed_training=MpiConfiguration(),\n",
|
||||
" pip_packages=['cntk-gpu==2.6'],\n",
|
||||
" custom_docker_image='microsoft/mmlspark:gpu-0.12',\n",
|
||||
" use_gpu=True)"
|
||||
@@ -296,7 +296,7 @@
|
||||
"source": [
|
||||
"We would like to train our model using a [pre-built Docker container](https://hub.docker.com/r/microsoft/mmlspark/). To do so, specify the name of the docker image to the argument `custom_docker_image`. Finally, we provide the `cntk` package to `pip_packages` to install CNTK 2.6 on our custom image.\n",
|
||||
"\n",
|
||||
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to run distributed CNTK, which uses MPI, you must provide the argument `distributed_backend='mpi'`."
|
||||
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to run distributed CNTK, which uses MPI, you must provide the argument `distributed_training=MpiConfiguration()`."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -236,7 +236,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI/Horovod, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, PyTorch, Horovod and their dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `PyTorch` constructor's `pip_packages` or `conda_packages` parameters."
|
||||
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI/Horovod, you must provide the argument `distributed_training=MpiConfiguration()`. Using this estimator with these settings, PyTorch, Horovod and their dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `PyTorch` constructor's `pip_packages` or `conda_packages` parameters."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -304,7 +304,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI/Horovod, you must provide the argument `distributed_backend='mpi'`. Using this estimator with these settings, TensorFlow, Horovod and their dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `TensorFlow` constructor's `pip_packages` or `conda_packages` parameters.\n",
|
||||
"The above code specifies that we will run our training script on `2` nodes, with one worker per node. In order to execute a distributed run using MPI/Horovod, you must provide the argument `distributed_training=MpiConfiguration()`. Using this estimator with these settings, TensorFlow, Horovod and their dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `TensorFlow` constructor's `pip_packages` or `conda_packages` parameters.\n",
|
||||
"\n",
|
||||
"Note that we passed our training data reference `ds_data` to our script's `--input_data` argument. This will 1) mount our datastore on the remote compute and 2) provide the path to the data zip file on our datastore."
|
||||
]
|
||||
|
||||
@@ -220,7 +220,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above code specifies that we will run our training script on `2` nodes, with two workers and one parameter server. In order to execute a native distributed TensorFlow run, you must provide the argument `distributed_backend='ps'`. Using this estimator with these settings, TensorFlow and its dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `TensorFlow` constructor's `pip_packages` or `conda_packages` parameters."
|
||||
"The above code specifies that we will run our training script on `2` nodes, with two workers and one parameter server. In order to execute a native distributed TensorFlow run, you must provide the argument `distributed_training=TensorflowConfiguration()`. Using this estimator with these settings, TensorFlow and its dependencies will be installed for you. However, if your script also uses other packages, make sure to install them via the `TensorFlow` constructor's `pip_packages` or `conda_packages` parameters."
|
||||
]
|
||||
},
|
||||
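For reference, a minimal sketch of the native distributed TensorFlow setup the text describes, with two workers and one parameter server; the entry script and compute target names are assumptions:

```python
# Hedged sketch: 2 workers + 1 parameter server via TensorflowConfiguration
# (entry_script and compute_target names are assumptions).
from azureml.core.runconfig import TensorflowConfiguration
from azureml.train.dnn import TensorFlow

distributed_config = TensorflowConfiguration()
distributed_config.worker_count = 2
distributed_config.parameter_server_count = 1

est = TensorFlow(source_directory='.',
                 compute_target=compute_target,
                 entry_script='tf_mnist_replica.py',
                 node_count=2,
                 distributed_training=distributed_config,
                 use_gpu=True)
```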
{
|
||||
|
||||
@@ -312,6 +312,35 @@
|
||||
"RunDetails(run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In addition to passing in a python file, you can also pass in a Jupyter notebook as the `entry_script`. [notebook_example.ipynb](notebook_example.ipynb) uses pm.record() to log key-value pairs which will appear in Azure Portal and shown in below widget.\n",
|
||||
"\n",
|
||||
"In order to run below, make sure `azureml-contrib-notebook` package is installed in current environment with `pip intall azureml-contrib-notebook`.\n",
|
||||
"\n",
|
||||
"This code snippet specifies the following parameters to the `Estimator` constructor. For more information on `Estimator`, please see [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-train-ml-models) or [API doc](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py).\n",
|
||||
"\n",
|
||||
"| Parameter | Description |\n",
|
||||
"| ------------- | ------------- |\n",
|
||||
"| source_directory | (str) Local directory that contains all of your code needed for the training job. This folder gets copied from your local machine to the remote compute |\n",
|
||||
"| compute_target | ([AbstractComputeTarget](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute_target.abstractcomputetarget?view=azure-ml-py) or str) Remote compute target that your training script will run on, in this case a previously created persistent compute cluster (cpu_cluster) |\n",
|
||||
"| entry_script | (str) Filepath (relative to the source_directory) of the training script/notebook to be run on the remote compute. This file, and any additional files it depends on, should be located in this folder |\n",
|
||||
"| script_params | (dict) A dictionary containing parameters to the `entry_script`. This is useful for passing datastore reference, for example, see [train-hyperparameter-tune-deploy-with-tensorflow.ipynb](../train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) |"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"est = Estimator(source_directory='.', compute_target=cpu_cluster, entry_script='notebook_example.ipynb', pip_packages=['nteract-scrapbook', 'azureml-contrib-notebook'])\n",
|
||||
"run = exp.submit(est)\n",
|
||||
"RunDetails(run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -424,7 +453,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"When all the runs complete, we can find the run with the best performance"
|
||||
"When all the runs complete, we can find the run with the best performance."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -475,7 +504,7 @@
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.8"
|
||||
},
|
||||
"msauthor": "minxia"
|
||||
"msauthor": "jingywa"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
|
||||
@@ -0,0 +1,57 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import scrapbook as sb\n",
|
||||
"sb.glue('Fibonacci numbers', [0, 1, 1, 2, 3, 5, 8, 13, 21, 34])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "jingywa"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.8"
|
||||
},
|
||||
"msauthor": "jingywa"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -425,7 +425,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Monitor the Run\n",
|
||||
"### Monitor the Run <a class=\"anchor\" id=\"monitor-run\"></a>\n",
|
||||
"As the Run is executed, it will go through the following stages:\n",
|
||||
"1. Preparing: A docker image is created matching the Python environment specified by the TensorFlow estimator and it will be uploaded to the workspace's Azure Container Registry. This step will only happen once for each Python environment -- the container will then be cached for subsequent runs. Creating and uploading the image takes about **5 minutes**. While the job is preparing, logs are streamed to the run history and can be viewed to monitor the progress of the image creation.\n",
|
||||
"\n",
|
||||
@@ -480,7 +480,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### The Run object\n",
|
||||
"### The Run object <a class=\"anchor\" id=\"run-object\"></a>\n",
|
||||
"The Run object provides the interface to the run history -- both to the job and to the control plane (this notebook), and both while the job is running and after it has completed. It provides a number of interesting features for instance:\n",
|
||||
"* `run.get_details()`: Provides a rich set of properties of the run\n",
|
||||
"* `run.get_metrics()`: Provides a dictionary with all the metrics that were reported for the Run\n",
|
||||
@@ -778,7 +778,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Find and register best model\n",
|
||||
"## Find and register best model <a class=\"anchor\" id=\"register-model\"></a>\n",
|
||||
"When all the jobs finish, we can find out the one that has the highest accuracy."
|
||||
]
|
||||
},
|
||||
@@ -1140,7 +1140,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
"version": "3.6.8"
|
||||
},
|
||||
"msauthor": "minxia"
|
||||
},
|
||||
|
||||
@@ -8,5 +8,6 @@ Follow these sample notebooks to learn:
|
||||
4. [Train on AmlCompute](train-on-amlcompute): train a model using an AmlCompute cluster as compute target.
|
||||
5. [Train in an HDI Spark cluster](train-in-spark): train a Spark ML model using an HDInsight Spark cluster as compute target.
|
||||
6. [Logging API](logging-api): experiment with various logging functions to create runs and automatically generate graphs.
|
||||
7. [Train and hyperparameter tune on Iris Dataset with Scikit-learn](train-hyperparameter-tune-deploy-with-sklearn): train a model using the Scikit-learn estimator and tune hyperparameters with Hyperdrive.
|
||||
|
||||

|
||||
@@ -100,7 +100,7 @@
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
"\n",
|
||||
"print(\"This notebook was created using SDK version 1.0.39, you are currently running version\", azureml.core.VERSION)"
|
||||
"print(\"This notebook was created using SDK version 1.0.43, you are currently running version\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -0,0 +1,484 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Train and hyperparameter tune on Iris Dataset with Scikit-learn\n",
|
||||
"In this tutorial, we demonstrate how to use the Azure ML Python SDK to train a support vector machine (SVM) on a single-node CPU with Scikit-learn to perform classification on the popular [Iris dataset](https://archive.ics.uci.edu/ml/datasets/iris). We will also demonstrate how to perform hyperparameter tuning of the model using Azure ML's HyperDrive service."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML Workspace"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Diagnostics"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Opt-in diagnostics for better experience, quality, and security of future releases."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.telemetry import set_diagnostics_collection\n",
|
||||
"\n",
|
||||
"set_diagnostics_collection(send_diagnostics=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize workspace"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Get default AmlCompute"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource.\n",
|
||||
"\n",
|
||||
"As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"compute_target = ws.get_default_compute_target(type=\"CPU\")\n",
|
||||
"\n",
|
||||
"# use get_status() to get a detailed status for the current cluster. \n",
|
||||
"print(compute_target.get_status().serialize())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above code retrieves the default CPU compute."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Train model on the remote compute"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now that you have your data and training script prepared, you are ready to train on your remote compute cluster. You can take advantage of Azure compute to leverage GPUs to cut down your training time."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a project directory"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"project_folder = './sklearn-iris'\n",
|
||||
"os.makedirs(project_folder, exist_ok=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Prepare training script"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now you will need to create your training script. In this tutorial, the training script is already provided for you at `train_iris`.py. In practice, you should be able to take any custom training script as is and run it with Azure ML without having to modify your code.\n",
|
||||
"\n",
|
||||
"However, if you would like to use Azure ML's [tracking and metrics](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#metrics) capabilities, you will have to add a small amount of Azure ML code inside your training script.\n",
|
||||
"\n",
|
||||
"In `train_iris.py`, we will log some metrics to our Azure ML run. To do so, we will access the Azure ML Run object within the script:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"from azureml.core.run import Run\n",
|
||||
"run = Run.get_context()\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Further within `train_iris.py`, we log the kernel and penalty parameters, and the highest accuracy the model achieves:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"run.log('Kernel type', np.string(args.kernel))\n",
|
||||
"run.log('Penalty', np.float(args.penalty))\n",
|
||||
"\n",
|
||||
"run.log('Accuracy', np.float(accuracy))\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"These run metrics will become particularly important when we begin hyperparameter tuning our model in the \"Tune model hyperparameters\" section.\n",
|
||||
"\n",
|
||||
"Once your script is ready, copy the training script `train_iris.py` into your project directory."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import shutil\n",
|
||||
"\n",
|
||||
"shutil.copy('train_iris.py', project_folder)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create an experiment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this Scikit-learn tutorial."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Experiment\n",
|
||||
"\n",
|
||||
"experiment_name = 'train_iris'\n",
|
||||
"experiment = Experiment(ws, name=experiment_name)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a Scikit-learn estimator"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The Azure ML SDK's Scikit-learn estimator enables you to easily submit Scikit-learn training jobs for single-node runs. The following code will define a single-node Scikit-learn job."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.train.sklearn import SKLearn\n",
|
||||
"\n",
|
||||
"script_params = {\n",
|
||||
" '--kernel': 'linear',\n",
|
||||
" '--penalty': 1.0,\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"estimator = SKLearn(source_directory=project_folder, \n",
|
||||
" script_params=script_params,\n",
|
||||
" compute_target=compute_target,\n",
|
||||
" entry_script='train_iris.py'\n",
|
||||
" )"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The `script_params` parameter is a dictionary containing the command-line arguments to your training script `entry_script`. To leverage the Azure VM's GPU for training, we set `use_gpu=True`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Submit job"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Run your experiment by submitting your estimator object. Note that this call is asynchronous."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run = experiment.submit(estimator)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Monitor your run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.widgets import RunDetails\n",
|
||||
"\n",
|
||||
"RunDetails(run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run.cancel()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Tune model hyperparameters"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now that we've seen how to do a simple Scikit-learn training run using the SDK, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Start a hyperparameter sweep"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"First, we will define the hyperparameter space to sweep over. Let's tune the `kernel` and `penalty` parameters. In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, `Accuracy`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.train.hyperdrive.runconfig import HyperDriveRunConfig\n",
|
||||
"from azureml.train.hyperdrive.sampling import RandomParameterSampling\n",
|
||||
"from azureml.train.hyperdrive.run import PrimaryMetricGoal\n",
|
||||
"from azureml.train.hyperdrive.parameter_expressions import choice\n",
|
||||
" \n",
|
||||
"\n",
|
||||
"param_sampling = RandomParameterSampling( {\n",
|
||||
" \"--kernel\": choice('linear', 'rbf', 'poly', 'sigmoid'),\n",
|
||||
" \"--penalty\": choice(0.5, 1, 1.5)\n",
|
||||
" }\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"hyperdrive_run_config = HyperDriveRunConfig(estimator=estimator,\n",
|
||||
" hyperparameter_sampling=param_sampling, \n",
|
||||
" primary_metric_name='Accuracy',\n",
|
||||
" primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,\n",
|
||||
" max_total_runs=12,\n",
|
||||
" max_concurrent_runs=4)"
|
||||
]
|
||||
},
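`HyperDriveRunConfig` also accepts an optional early-termination policy that stops poorly performing runs before they finish, saving compute. A sketch assuming the Bandit policy; the `evaluation_interval` and `slack_factor` values here are illustrative and not part of the original notebook:

```python
from azureml.train.hyperdrive.policy import BanditPolicy

# Stop any run whose reported 'Accuracy' falls more than 10% (slack_factor=0.1)
# below the current best, checking every 2 metric reports.
early_termination_policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)

hyperdrive_run_config = HyperDriveRunConfig(estimator=estimator,
                                            hyperparameter_sampling=param_sampling,
                                            policy=early_termination_policy,
                                            primary_metric_name='Accuracy',
                                            primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                                            max_total_runs=12,
                                            max_concurrent_runs=4)
```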
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, lauch the hyperparameter tuning job."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# start the HyperDrive run\n",
|
||||
"hyperdrive_run = experiment.submit(hyperdrive_run_config)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Monitor HyperDrive runs"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can monitor the progress of the runs with the following Jupyter widget."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"RunDetails(hyperdrive_run).show()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
}
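Once the sweep completes, HyperDrive can hand you the best-performing child run; a minimal sketch:

```python
# Fetch the child run that achieved the best 'Accuracy' and inspect it.
best_run = hyperdrive_run.get_best_run_by_primary_metric()
print(best_run.id)
print(best_run.get_metrics())
```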
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "dipeck"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.5.2"
|
||||
},
|
||||
"msauthor": "dipeck"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -0,0 +1,53 @@
|
||||
# Modified from https://www.geeksforgeeks.org/multiclass-classification-using-scikit-learn/
|
||||
|
||||
import argparse
|
||||
|
||||
# importing necessary libraries
|
||||
import numpy as np
|
||||
|
||||
from sklearn import datasets
|
||||
from sklearn.metrics import confusion_matrix
|
||||
from sklearn.model_selection import train_test_split
|
||||
|
||||
from azureml.core.run import Run
|
||||
run = Run.get_context()
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser()
|
||||
|
||||
parser.add_argument('--kernel', type=str, default='linear',
|
||||
help='Kernel type to be used in the algorithm')
|
||||
parser.add_argument('--penalty', type=float, default=1.0,
|
||||
help='Penalty parameter of the error term')
|
||||
|
||||
args = parser.parse_args()
|
||||
run.log('Kernel type', str(args.kernel))
|
||||
run.log('Penalty', float(args.penalty))
|
||||
|
||||
# loading the iris dataset
|
||||
iris = datasets.load_iris()
|
||||
|
||||
# X -> features, y -> label
|
||||
X = iris.data
|
||||
y = iris.target
|
||||
|
||||
# dividing X, y into train and test data
|
||||
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
|
||||
|
||||
# training a linear SVM classifier
|
||||
from sklearn.svm import SVC
|
||||
svm_model_linear = SVC(kernel=args.kernel, C=args.penalty).fit(X_train, y_train)
|
||||
svm_predictions = svm_model_linear.predict(X_test)
|
||||
|
||||
# model accuracy for X_test
|
||||
accuracy = svm_model_linear.score(X_test, y_test)
|
||||
print('Accuracy of SVM classifier on test set: {:.2f}'.format(accuracy))
|
||||
run.log('Accuracy', float(accuracy))
|
||||
# creating a confusion matrix
|
||||
cm = confusion_matrix(y_test, svm_predictions)
|
||||
print(cm)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
@@ -164,6 +164,30 @@
|
||||
"shutil.copy('train.py', project_folder)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create environment\n",
|
||||
"\n",
|
||||
"Create Docker based environment with scikit-learn installed."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Environment\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"myenv = Environment(\"myenv\")\n",
|
||||
"\n",
|
||||
"myenv.docker.enabled = True\n",
|
||||
"myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
|
||||
]
|
||||
},
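Environments can also be registered to the workspace and reused across runs; a small sketch, assuming you want to persist `myenv` under its name:

```python
# Register the environment so later runs (and teammates) can retrieve it by name.
myenv.register(workspace=ws)

# Later, fetch the same definition back from the workspace.
restored_env = Environment.get(workspace=ws, name="myenv")
```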
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -179,38 +203,32 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"from azureml.core import ScriptRunConfig\n",
|
||||
"from azureml.core.runconfig import DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"# create a new runconfig object\n",
|
||||
"run_config = RunConfiguration()\n",
|
||||
"src = ScriptRunConfig(source_directory=project_folder, script='train.py')\n",
|
||||
"\n",
|
||||
"default_compute_target = ws.get_default_compute_target(type=\"CPU\")\n",
|
||||
"# Use default compute target\n",
|
||||
"src.run_config.target = ws.get_default_compute_target(type=\"CPU\").name\n",
|
||||
"\n",
|
||||
"# signal that you want to use AmlCompute to execute script.\n",
|
||||
"run_config.target = default_compute_target.name\n",
|
||||
"\n",
|
||||
"# enable Docker \n",
|
||||
"run_config.environment.docker.enabled = True\n",
|
||||
"\n",
|
||||
"# set Docker base image to the default CPU-based image\n",
|
||||
"run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE\n",
|
||||
"\n",
|
||||
"# use conda_dependencies.yml to create a conda environment in the Docker image for execution\n",
|
||||
"run_config.environment.python.user_managed_dependencies = False\n",
|
||||
"\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
|
||||
"\n",
|
||||
"# Now submit a run on AmlCompute\n",
|
||||
"from azureml.core.script_run_config import ScriptRunConfig\n",
|
||||
"\n",
|
||||
"script_run_config = ScriptRunConfig(source_directory=project_folder,\n",
|
||||
" script='train.py',\n",
|
||||
" run_config=run_config)\n",
|
||||
"\n",
|
||||
"run = experiment.submit(script_run_config)\n",
|
||||
"# Set environment\n",
|
||||
"src.run_config.environment = myenv"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Submit run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run = experiment.submit(src)\n",
|
||||
"\n",
|
||||
"# Show run details\n",
|
||||
"run"
|
||||
@@ -292,27 +310,9 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"# create a new RunConfig object\n",
|
||||
"run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to AmlCompute target created in previous step\n",
|
||||
"run_config.target = cpu_cluster.name\n",
|
||||
"\n",
|
||||
"# enable Docker \n",
|
||||
"run_config.environment.docker.enabled = True\n",
|
||||
"\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
|
||||
"\n",
|
||||
"from azureml.core import Run\n",
|
||||
"from azureml.core import ScriptRunConfig\n",
|
||||
"\n",
|
||||
"src = ScriptRunConfig(source_directory=project_folder, \n",
|
||||
" script='train.py', \n",
|
||||
" run_config=run_config) \n",
|
||||
"# Set compute target to the one created in previous step\n",
|
||||
"src.run_config.target = cpu_cluster.name\n",
|
||||
" \n",
|
||||
"run = experiment.submit(config=src)\n",
|
||||
"run"
|
||||
]
|
||||
@@ -397,27 +397,9 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"# create a new RunConfig object\n",
|
||||
"run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to AmlCompute target created in previous step\n",
|
||||
"run_config.target = cpu_cluster.name\n",
|
||||
"\n",
|
||||
"# enable Docker \n",
|
||||
"run_config.environment.docker.enabled = True\n",
|
||||
"\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
|
||||
"\n",
|
||||
"from azureml.core import Run\n",
|
||||
"from azureml.core import ScriptRunConfig\n",
|
||||
"\n",
|
||||
"src = ScriptRunConfig(source_directory=project_folder, \n",
|
||||
" script='train.py', \n",
|
||||
" run_config=run_config) \n",
|
||||
"# Set compute target to the one created in previous step\n",
|
||||
"src.run_config.target = cpu_cluster.name\n",
|
||||
" \n",
|
||||
"run = experiment.submit(config=src)\n",
|
||||
"run"
|
||||
]
|
||||
|
||||
@@ -170,15 +170,15 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core import Environment\n",
|
||||
"\n",
|
||||
"# Editing a run configuration property on-fly.\n",
|
||||
"run_config_user_managed = RunConfiguration()\n",
|
||||
"user_managed_env = Environment(\"user-managed-env\")\n",
|
||||
"\n",
|
||||
"run_config_user_managed.environment.python.user_managed_dependencies = True\n",
|
||||
"user_managed_env.python.user_managed_dependencies = True\n",
|
||||
"\n",
|
||||
"# You can choose a specific Python environment by pointing to a Python path \n",
|
||||
"#run_config.environment.python.interpreter_path = '/home/johndoe/miniconda3/envs/myenv/bin/python'"
|
||||
"#user_managed_env.python.interpreter_path = '/home/johndoe/miniconda3/envs/myenv/bin/python'"
|
||||
]
|
||||
},
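Taken together, the `Environment`-based flow this diff moves to amounts to the following; a sketch reusing the `exp` experiment from this notebook:

```python
from azureml.core import Environment, ScriptRunConfig

# User-managed: Azure ML runs the script in your current Python environment as-is.
user_managed_env = Environment("user-managed-env")
user_managed_env.python.user_managed_dependencies = True

src = ScriptRunConfig(source_directory='./', script='train.py')
src.run_config.environment = user_managed_env

run = exp.submit(src)
```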
|
||||
{
|
||||
@@ -197,7 +197,16 @@
|
||||
"source": [
|
||||
"from azureml.core import ScriptRunConfig\n",
|
||||
"\n",
|
||||
"src = ScriptRunConfig(source_directory='./', script='train.py', run_config=run_config_user_managed)\n",
|
||||
"src = ScriptRunConfig(source_directory='./', script='train.py')\n",
|
||||
"src.run_config.environment = user_managed_env"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run = exp.submit(src)"
|
||||
]
|
||||
},
|
||||
@@ -277,13 +286,13 @@
|
||||
"source": [
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"run_config_system_managed = RunConfiguration()\n",
|
||||
"system_managed_env = Environment(\"system-managed-env\")\n",
|
||||
"\n",
|
||||
"run_config_system_managed.environment.python.user_managed_dependencies = False\n",
|
||||
"system_managed_env.python.user_managed_dependencies = False\n",
|
||||
"\n",
|
||||
"# Specify conda dependencies with scikit-learn\n",
|
||||
"cd = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
|
||||
"run_config_system_managed.environment.python.conda_dependencies = cd"
|
||||
"system_managed_env.python.conda_dependencies = cd"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -302,7 +311,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"src = ScriptRunConfig(source_directory=\"./\", script='train.py', run_config=run_config_system_managed)\n",
|
||||
"src.run_config.environment = system_managed_env\n",
|
||||
"run = exp.submit(src)"
|
||||
]
|
||||
},
|
||||
@@ -371,18 +380,16 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run_config_docker = RunConfiguration()\n",
|
||||
"run_config_docker.environment.python.user_managed_dependencies = False\n",
|
||||
"run_config_docker.environment.docker.enabled = True\n",
|
||||
"docker_env = Environment(\"docker-env\")\n",
|
||||
"\n",
|
||||
"docker_env.python.user_managed_dependencies = False\n",
|
||||
"docker_env.docker.enabled = True\n",
|
||||
"\n",
|
||||
"# use the default CPU-based Docker image from Azure ML\n",
|
||||
"run_config_docker.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE # Reference Docker image\n",
|
||||
"print(docker_env.docker.base_image)\n",
|
||||
"\n",
|
||||
"# Specify conda dependencies with scikit-learn\n",
|
||||
"cd = CondaDependencies.create(conda_packages=['scikit-learn'])\n",
|
||||
"run_config_docker.environment.python.conda_dependencies = cd\n",
|
||||
"\n",
|
||||
"src = ScriptRunConfig(source_directory=\"./\", script='train.py', run_config=run_config_docker)"
|
||||
"docker_env.python.conda_dependencies = cd"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -402,6 +409,8 @@
|
||||
"source": [
|
||||
"import subprocess\n",
|
||||
"\n",
|
||||
"src.run_config.environment = docker_env\n",
|
||||
"\n",
|
||||
"# Check if Docker is installed and Linux containers are enabled\n",
|
||||
"if subprocess.run(\"docker -v\", shell=True).returncode == 0:\n",
|
||||
" out = subprocess.check_output(\"docker system info\", shell=True).decode('ascii')\n",
|
||||
@@ -657,7 +666,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.8"
|
||||
"version": "3.6.5"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -280,20 +280,11 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.runconfig import RunConfiguration\n",
|
||||
"from azureml.core import Environment\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"# create a new RunConfig object\n",
|
||||
"conda_run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to the Linux DSVM\n",
|
||||
"conda_run_config.target = attached_dsvm_compute.name\n",
|
||||
"\n",
|
||||
"# set the data reference of the run configuration\n",
|
||||
"conda_run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"conda_run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
|
||||
"conda_env = Environment(\"conda-env\")\n",
|
||||
"conda_env.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -306,18 +297,14 @@
|
||||
"\n",
|
||||
"src = ScriptRunConfig(source_directory=script_folder, \n",
|
||||
" script='train.py', \n",
|
||||
" run_config=conda_run_config, \n",
|
||||
" # pass the datastore reference as a parameter to the training script\n",
|
||||
" arguments=['--data-folder', str(ds.as_download())] \n",
|
||||
" ) \n",
|
||||
"run = exp.submit(config=src)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run)."
|
||||
"\n",
|
||||
"src.run_config.framework = \"python\"\n",
|
||||
"src.run_config.environment = conda_env\n",
|
||||
"src.run_config.target = attached_dsvm_compute.name\n",
|
||||
"src.run_config.data_references = {ds.name: dr}"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -326,9 +313,18 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run = exp.submit(config=src)\n",
|
||||
"\n",
|
||||
"run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
@@ -359,17 +355,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# create a new RunConfig object\n",
|
||||
"vm_run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"\n",
|
||||
"# Set compute target to the Linux DSVM\n",
|
||||
"vm_run_config.target = attached_dsvm_compute.name\n",
|
||||
"\n",
|
||||
"# set the data reference of the run coonfiguration\n",
|
||||
"conda_run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
"# Let system know that you will configure the Python environment yourself.\n",
|
||||
"vm_run_config.environment.python.user_managed_dependencies = True"
|
||||
"conda_env.python.user_managed_dependencies = True"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -385,11 +371,6 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"src = ScriptRunConfig(source_directory=script_folder, \n",
|
||||
" script='train.py', \n",
|
||||
" run_config=vm_run_config,\n",
|
||||
" # pass the datastore reference as a parameter to the training script\n",
|
||||
" arguments=['--data-folder', str(ds.as_download())])\n",
|
||||
"run = exp.submit(config=src)\n",
|
||||
"run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
@@ -398,7 +379,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can choose to SSH into the VM and install Azure ML SDK, and any other missing dependencies, in that Python environment. For demonstration purposes, we simply are going to use another script `train2.py` that doesn't have azureml dependencies, and submit it instead."
|
||||
"You can choose to SSH into the VM and install Azure ML SDK, and any other missing dependencies, in that Python environment. For demonstration purposes, we simply are going to use another script `train2.py` that doesn't have azureml or data store dependencies, and submit it instead."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -411,7 +392,10 @@
|
||||
"shutil.copy('./train2.py', os.path.join(script_folder, 'train2.py'))\n",
|
||||
"\n",
|
||||
"with open(os.path.join(script_folder, './train2.py'), 'r') as training_script:\n",
|
||||
" print(training_script.read())"
|
||||
" print(training_script.read())\n",
|
||||
" \n",
|
||||
"src.run_config.data_references = {}\n",
|
||||
"src.script = \"train2.py\""
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -427,10 +411,8 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"src = ScriptRunConfig(source_directory=script_folder, \n",
|
||||
" script='train2.py', \n",
|
||||
" run_config=vm_run_config)\n",
|
||||
"run = exp.submit(config=src)\n",
|
||||
"\n",
|
||||
"run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
@@ -464,24 +446,10 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Load the \"cpu-dsvm.runconfig\" file (created by the above attach operation) in memory\n",
|
||||
"docker_run_config = RunConfiguration(framework=\"python\")\n",
|
||||
"conda_env.docker.enabled = True\n",
|
||||
"conda_env.python.user_managed_dependencies = False\n",
|
||||
"\n",
|
||||
"# Set compute target to the Linux DSVM\n",
|
||||
"docker_run_config.target = attached_dsvm_compute.name\n",
|
||||
"\n",
|
||||
"# Use Docker in the remote VM\n",
|
||||
"docker_run_config.environment.docker.enabled = True\n",
|
||||
"\n",
|
||||
"# Use CPU base image from DockerHub\n",
|
||||
"docker_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
|
||||
"print('Base Docker image is:', docker_run_config.environment.docker.base_image)\n",
|
||||
"\n",
|
||||
"# set the data reference of the run coonfiguration\n",
|
||||
"docker_run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
"# specify CondaDependencies obj\n",
|
||||
"docker_run_config.environment.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'])"
|
||||
"print('Base Docker image is:', conda_env.docker.base_image)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -498,20 +466,11 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"src = ScriptRunConfig(source_directory=script_folder, \n",
|
||||
" script='train.py', \n",
|
||||
" run_config=docker_run_config,\n",
|
||||
" # pass the datastore reference as a parameter to the training script\n",
|
||||
" arguments=['--data-folder', str(ds.as_download())])\n",
|
||||
"run = exp.submit(config=src)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"src.script = \"train.py\"\n",
|
||||
"src.run_config.data_references = {ds.name: dr}\n",
|
||||
"\n",
|
||||
"run = exp.submit(config=src)\n",
|
||||
"\n",
|
||||
"run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
@@ -525,20 +484,20 @@
|
||||
"\n",
|
||||
"```python\n",
|
||||
"# use an image available in Docker Hub without authentication\n",
|
||||
"run_config_docker.environment.docker.base_image = \"continuumio/miniconda3\"\n",
|
||||
"conda_env.docker.base_image = \"continuumio/miniconda3\"\n",
|
||||
"\n",
|
||||
"# or, use an image available in a private Azure Container Registry\n",
|
||||
"run_config_docker.environment.docker.base_image = \"mycustomimage:1.0\"\n",
|
||||
"run_config_docker.environment.docker.base_image_registry.address = \"myregistry.azurecr.io\"\n",
|
||||
"run_config_docker.environment.docker.base_image_registry.username = \"username\"\n",
|
||||
"run_config_docker.environment.docker.base_image_registry.password = \"password\"\n",
|
||||
"conda_env.docker.base_image = \"mycustomimage:1.0\"\n",
|
||||
"conda_env.docker.base_image_registry.address = \"myregistry.azurecr.io\"\n",
|
||||
"conda_env.docker.base_image_registry.username = \"username\"\n",
|
||||
"conda_env.docker.base_image_registry.password = \"password\"\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"When you are using a custom Docker image, you might already have your environment setup properly in a Python environment in the Docker image. In that case, you can skip specifying conda dependencies, and just use `user_managed_dependencies` option instead:\n",
|
||||
"```python\n",
|
||||
"run_config_docker.environment.python.user_managed_dependencies = True\n",
|
||||
"conda_env.python.user_managed_dependencies = True\n",
|
||||
"# path to the Python environment in the custom Docker image\n",
|
||||
"run_config.environment.python.interpreter_path = '/opt/conda/bin/python'\n",
|
||||
"conda_env.python.interpreter_path = '/opt/conda/bin/python'\n",
|
||||
"```"
|
||||
]
|
||||
},
|
||||
@@ -640,7 +599,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.6"
|
||||
"version": "3.6.5"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -0,0 +1,8 @@
|
||||
# Copyright (c) Microsoft Corporation. All rights reserved.
|
||||
# Licensed under the MIT License
|
||||
|
||||
# Very simple script to demonstrate run in environment
|
||||
# Print message passed in as environment variable
|
||||
import os
|
||||
|
||||
print(os.environ.get("MESSAGE"))
|
||||
@@ -60,12 +60,12 @@
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import azureml.core\n",
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"print(azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
@@ -75,6 +75,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.workspace import Workspace\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"ws.get_details()"
|
||||
]
|
||||
@@ -176,7 +177,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"runconfig = ScriptRunConfig(source_directory=\"example\", script=\"example.py\")\n",
|
||||
"runconfig = ScriptRunConfig(source_directory=\".\", script=\"example.py\")\n",
|
||||
"runconfig.run_config.target = \"local\"\n",
|
||||
"runconfig.run_config.environment = myenv\n",
|
||||
"run = myexp.submit(config=runconfig)\n",
|
||||
|
||||
how-to-use-azureml/using-mlflow/README.md (new file, 11 lines)
@@ -0,0 +1,11 @@
|
||||
## Use MLflow with Azure Machine Learning service (Preview)
|
||||
|
||||
[MLflow](https://mlflow.org/) is an open-source platform for tracking machine learning experiments and managing models. You can use MLflow logging APIs with Azure Machine Learning service: the metrics and artifacts are logged to your Azure ML Workspace.
|
||||
|
||||
Try out the sample notebooks:
|
||||
|
||||
* [Use MLflow with Azure Machine Learning for local training run](./train-local/train-local.ipynb)
|
||||
* [Use MLflow with Azure Machine Learning for remote training run](./train-remote/train-remote.ipynb)
|
||||
* [Deploy Model as Azure Machine Learning web service using MLflow](./deploy-model/deploy-model.ipynb)
|
||||
|
||||

|
||||
how-to-use-azureml/using-mlflow/deploy-model/deploy-model.ipynb (new file, 322 lines)
@@ -0,0 +1,322 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deploy Model as Azure Machine Learning Web Service using MLflow\n",
|
||||
"\n",
|
||||
"This example shows you how to use mlflow together with Azure Machine Learning services for deploying a model as a web service. You'll learn how to:\n",
|
||||
"\n",
|
||||
" 1. Retrieve a previously trained scikit-learn model\n",
|
||||
" 2. Create a Docker image from the model\n",
|
||||
" 3. Deploy the model as a web service on Azure Container Instance\n",
|
||||
" 4. Make a scoring request against the web service.\n",
|
||||
"\n",
|
||||
"## Prerequisites and Set-up\n",
|
||||
"\n",
|
||||
"This notebook requires you to first complete the [Use MLflow with Azure Machine Learning for Local Training Run](../train-local/train-local.ipnyb) or [Use MLflow with Azure Machine Learning for Remote Training Run](../train-remote/train-remote.ipnyb) notebook, so as to have an experiment run with uploaded model in your Azure Machine Learning Workspace.\n",
|
||||
"\n",
|
||||
"Also install following packages if you haven't already\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"pip install azureml-mlflow pandas\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Then, import necessary packages:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import mlflow\n",
|
||||
"import azureml.mlflow\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Connect to workspace and set MLflow tracking URI\n",
|
||||
"\n",
|
||||
"Setting the tracking URI is required for retrieving the model and creating an image using the MLflow APIs."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Retrieve model from previous run\n",
|
||||
"\n",
|
||||
"Let's retrieve the experiment from training notebook, and list the runs within that experiment."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"experiment_name = \"experiment-with-mlflow\"\n",
|
||||
"exp = ws.experiments[experiment_name]\n",
|
||||
"\n",
|
||||
"runs = list(exp.get_runs())\n",
|
||||
"runs"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Then, let's select the most recent training run and find its ID. You also need to specify the path in run history where the model was saved. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"runid = runs[0].id\n",
|
||||
"model_save_path = \"model\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create Docker image\n",
|
||||
"\n",
|
||||
"To create a Docker image with Azure Machine Learning for Model Management, use ```mlflow.azureml.build_image``` method. Specify the model path, your workspace, run ID and other parameters.\n",
|
||||
"\n",
|
||||
"MLflow automatically recognizes the model framework as scikit-learn, and creates the scoring logic and includes library dependencies for you.\n",
|
||||
"\n",
|
||||
"Note that the image creation can take several minutes."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import mlflow.azureml\n",
|
||||
"\n",
|
||||
"azure_image, azure_model = mlflow.azureml.build_image(model_uri=\"runs:/{}/{}\".format(runid, model_save_path),\n",
|
||||
" workspace=ws,\n",
|
||||
" model_name='diabetes-sklearn-model',\n",
|
||||
" image_name='diabetes-sklearn-image',\n",
|
||||
" synchronous=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Deploy web service\n",
|
||||
"\n",
|
||||
"Let's use Azure Machine Learning SDK to deploy the image as a web service. \n",
|
||||
"\n",
|
||||
"First, specify the deployment configuration. Azure Container Instance is a suitable choice for a quick dev-test deployment, while Azure Kubernetes Service is suitable for scalable production deployments.\n",
|
||||
"\n",
|
||||
"Then, deploy the image using Azure Machine Learning SDK's ```deploy_from_image``` method.\n",
|
||||
"\n",
|
||||
"Note that the deployment can take several minutes."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.webservice import AciWebservice, Webservice\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"aci_config = AciWebservice.deploy_configuration(cpu_cores=1, \n",
|
||||
" memory_gb=1, \n",
|
||||
" tags={\"method\" : \"sklearn\"}, \n",
|
||||
" description='Diabetes model',\n",
|
||||
" location='eastus2')\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"# Deploy the image to Azure Container Instances (ACI) for real-time serving\n",
|
||||
"webservice = Webservice.deploy_from_image(\n",
|
||||
" image=azure_image, workspace=ws, name=\"diabetes-model-1\", deployment_config=aci_config)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"webservice.wait_for_deployment(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Make a scoring request\n",
|
||||
"\n",
|
||||
"Let's take the first few rows of test data and score them using the web service"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"test_rows = [\n",
|
||||
" [0.01991321, 0.05068012, 0.10480869, 0.07007254, -0.03596778,\n",
|
||||
" -0.0266789 , -0.02499266, -0.00259226, 0.00371174, 0.04034337],\n",
|
||||
" [-0.01277963, -0.04464164, 0.06061839, 0.05285819, 0.04796534,\n",
|
||||
" 0.02937467, -0.01762938, 0.03430886, 0.0702113 , 0.00720652],\n",
|
||||
" [ 0.03807591, 0.05068012, 0.00888341, 0.04252958, -0.04284755,\n",
|
||||
" -0.02104223, -0.03971921, -0.00259226, -0.01811827, 0.00720652]]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"MLflow-based web service for scikit-learn model requires the data to be converted to Pandas DataFrame, and then serialized as JSON. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"import pandas as pd\n",
|
||||
"\n",
|
||||
"test_rows_as_json = pd.DataFrame(test_rows).to_json(orient=\"split\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Let's pass the conveted and serialized data to web service to get the predictions."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"predictions = webservice.run(test_rows_as_json)\n",
|
||||
"\n",
|
||||
"print(predictions)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can use the web service's scoring URI to make a raw HTTP request"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"webservice.scoring_uri"
|
||||
]
|
||||
},
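A minimal sketch of such a raw request with the `requests` library, reusing `test_rows_as_json` from above:

```python
import requests

# POST the JSON-serialized rows directly to the scoring endpoint.
response = requests.post(webservice.scoring_uri,
                         data=test_rows_as_json,
                         headers={"Content-Type": "application/json"})
print(response.json())
```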
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can diagnose the web service using ```get_logs``` method."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"webservice.get_logs()"
|
||||
]
|
||||
},
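When you are done experimenting, you may want to tear the deployment down to avoid idle ACI charges; `delete` is part of the `Webservice` API:

```python
# Remove the ACI web service created above.
webservice.delete()
```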
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Next Steps\n",
|
||||
"\n",
|
||||
"Learn about [model management and inferencing in Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-model-management-and-deployment)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "rastala"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.4"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
how-to-use-azureml/using-mlflow/train-local/train-local.ipynb (new file, 248 lines)
@@ -0,0 +1,248 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Use MLflow with Azure Machine Learning for Local Training Run\n",
|
||||
"\n",
|
||||
"This example shows you how to use mlflow tracking APIs together with Azure Machine Learning services for storing your metrics and artifacts, from local Notebook run. You'll learn how to:\n",
|
||||
"\n",
|
||||
" 1. Set up MLflow tracking URI so as to use Azure ML\n",
|
||||
" 2. Create experiment\n",
|
||||
" 3. Train a model on your local computer while logging metrics and artifacts\n",
|
||||
" 4. View your experiment within your Azure ML Workspace in Azure Portal.\n",
|
||||
"\n",
|
||||
"## Prerequisites and Set-up\n",
|
||||
"\n",
|
||||
"Make sure you have completed the [Configuration](../../../configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met.\n",
|
||||
"\n",
|
||||
"Install azureml-mlflow package before running this notebook. Note that mlflow itself gets installed as dependency if you haven't installed it yet.\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"pip install azureml-mlflow\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"This example also uses scikit-learn and matplotlib packages. Install them:\n",
|
||||
"```\n",
|
||||
"pip install scikit-learn matplotlib\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Then, import necessary packages"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import mlflow\n",
|
||||
"import mlflow.sklearn\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core import Workspace\n",
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
"\n",
|
||||
"# Check core SDK version number\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set tracking URI\n",
|
||||
"\n",
|
||||
"Set the MLflow tracking URI to point to your Azure ML Workspace. The subsequent logging calls from MLflow APIs will go to Azure ML services and will be tracked under your Workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws = Workspace.from_config()\n",
|
||||
"\n",
|
||||
"mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create Experiment\n",
|
||||
"\n",
|
||||
"In both MLflow and Azure ML, training runs are grouped into experiments. Let's create one for our experimentation."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"experiment_name = \"experiment-with-mlflow\"\n",
|
||||
"mlflow.set_experiment(experiment_name)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create training and test data set\n",
|
||||
"\n",
|
||||
"This example uses diabetes dataset to build a simple regression model. Let's load the dataset and split it into training and test sets."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import numpy as np\n",
|
||||
"from sklearn.datasets import load_diabetes\n",
|
||||
"from sklearn.linear_model import Ridge\n",
|
||||
"from sklearn.metrics import mean_squared_error\n",
|
||||
"from sklearn.model_selection import train_test_split\n",
|
||||
"\n",
|
||||
"X, y = load_diabetes(return_X_y = True)\n",
|
||||
"columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']\n",
|
||||
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n",
|
||||
"data = {\n",
|
||||
" \"train\":{\"X\": X_train, \"y\": y_train}, \n",
|
||||
" \"test\":{\"X\": X_test, \"y\": y_test}\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"print (\"Data contains\", len(data['train']['X']), \"training samples and\",len(data['test']['X']), \"test samples\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Train while logging metrics and artifacts\n",
|
||||
"\n",
|
||||
"Next, start a mlflow run to train a scikit-learn regression model. Note that the training script has been instrumented using MLflow to:\n",
|
||||
" * Log model hyperparameter alpha value\n",
|
||||
" * Log mean squared error against test set\n",
|
||||
" * Save the scikit-learn based regression model produced by training\n",
|
||||
" * Save an image that shows actuals vs predictions against test set.\n",
|
||||
" \n",
|
||||
"These metrics and artifacts have been recorded to your Azure ML Workspace; in the next step you'll learn how to view them."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Create a run object in the experiment\n",
|
||||
"model_save_path = \"model\"\n",
|
||||
"\n",
|
||||
"with mlflow.start_run() as run:\n",
|
||||
" # Log the algorithm parameter alpha to the run\n",
|
||||
" mlflow.log_metric('alpha', 0.03)\n",
|
||||
" # Create, fit, and test the scikit-learn Ridge regression model\n",
|
||||
" regression_model = Ridge(alpha=0.03)\n",
|
||||
" regression_model.fit(data['train']['X'], data['train']['y'])\n",
|
||||
" preds = regression_model.predict(data['test']['X'])\n",
|
||||
"\n",
|
||||
" # Log mean squared error\n",
|
||||
" print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))\n",
|
||||
" mlflow.log_metric('mse', mean_squared_error(data['test']['y'], preds))\n",
|
||||
" \n",
|
||||
" # Save the model to the outputs directory for capture\n",
|
||||
" mlflow.sklearn.log_model(regression_model,model_save_path)\n",
|
||||
" \n",
|
||||
" # Plot actuals vs predictions and save the plot within the run\n",
|
||||
" fig = plt.figure(1)\n",
|
||||
" idx = np.argsort(data['test']['y'])\n",
|
||||
" plt.plot(data['test']['y'][idx],preds[idx])\n",
|
||||
" fig.savefig(\"actuals_vs_predictions.png\")\n",
|
||||
" mlflow.log_artifact(\"actuals_vs_predictions.png\") "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can open the report page for your experiment and runs within it from Azure Portal.\n",
|
||||
"\n",
|
||||
"Select one of the runs to view the metrics, and the plot you saved. The saved scikit-learn model appears under **outputs** tab."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws.experiments[experiment_name]"
|
||||
]
|
||||
},
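You can also read the logged values back programmatically through the MLflow client; a sketch assuming an MLflow 1.x tracking client:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
experiment = client.get_experiment_by_name(experiment_name)

# Print each run's ID and its logged metrics (e.g. 'alpha' and 'mse').
for run_info in client.list_run_infos(experiment.experiment_id):
    print(run_info.run_id, client.get_run(run_info.run_id).data.metrics)
```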
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Next steps\n",
|
||||
"\n",
|
||||
"Try out these notebooks to learn more about MLflow-Azure Machine Learning integration:\n",
|
||||
"\n",
|
||||
" * [Train a model using remote compute on Azure Cloud](../train-on-remote/train-on-remote.ipynb)\n",
|
||||
" * [Deploy the model as a web service](../deploy-model/deploy-model.ipynb)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "rastala"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.4"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
how-to-use-azureml/using-mlflow/train-remote/train-remote.ipynb (new file, 318 lines)
@@ -0,0 +1,318 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Use MLflow with Azure Machine Learning for Remote Training Run\n",
|
||||
"\n",
|
||||
"This example shows you how to use MLflow tracking APIs together with Azure Machine Learning services for storing your metrics and artifacts, from local Notebook run. You'll learn how to:\n",
|
||||
"\n",
|
||||
" 1. Set up MLflow tracking URI so as to use Azure ML\n",
|
||||
" 2. Create experiment\n",
|
||||
" 3. Train a model on Machine Learning Compute while logging metrics and artifacts\n",
|
||||
" 4. View your experiment within your Azure ML Workspace in Azure Portal."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"Make sure you have completed the [Configuration](../../../configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set-up\n",
|
||||
"\n",
|
||||
"Check Azure ML SDK version installed on your computer, and then connect to your Workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"from azureml.core import Workspace, Experiment\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Let's also create a Machine Learning Compute cluster for submitting the remote run. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"\n",
|
||||
"# Choose a name for your CPU cluster\n",
|
||||
"cpu_cluster_name = \"cpu-cluster\"\n",
|
||||
"\n",
|
||||
"# Verify that cluster does not exist already\n",
|
||||
"try:\n",
|
||||
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
|
||||
" print(\"Found existing cpu-cluster\")\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print(\"Creating new cpu-cluster\")\n",
|
||||
" \n",
|
||||
" # Specify the configuration for the new cluster\n",
|
||||
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_D2_V2\",\n",
|
||||
" min_nodes=0,\n",
|
||||
" max_nodes=1)\n",
|
||||
"\n",
|
||||
" # Create the cluster with the specified name and configuration\n",
|
||||
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
|
||||
" \n",
|
||||
" # Wait for the cluster to complete, show the output log\n",
|
||||
" cpu_cluster.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create Azure ML Experiment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The following steps show how to submit a training Python script to a cluster as an Azure ML run, while logging happens through MLflow APIs to your Azure ML Workspace. Let's first create an experiment to hold the training runs."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Experiment\n",
|
||||
"\n",
|
||||
"experiment_name = \"experiment-with-mlflow\"\n",
|
||||
"exp = Experiment(workspace=ws, name=experiment_name)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Instrument remote training script using MLflow"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Let's use [*train_diabetes.py*](train_diabetes.py) to train a regression model against diabetes dataset as the example. Note that the training script uses mlflow.start_run() to start logging, and then logs metrics, saves the trained scikit-learn model, and saves a plot as an artifact.\n",
|
||||
"\n",
|
||||
"Run following command to view the script file. Notice the mlflow logging statements, and also notice that the script doesn't have explicit dependencies on azureml library."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"training_script = 'train_diabetes.py'\n",
|
||||
"with open(training_script, 'r') as f:\n",
|
||||
" print(f.read())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Submit Run to Cluster \n",
|
||||
"\n",
|
||||
"Let's submit the run to cluster. When running on the remote cluster as submitted run, Azure ML sets the MLflow tracking URI to point to your Azure ML Workspace, so that the metrics and artifacts are automatically logged there.\n",
|
||||
"\n",
|
||||
"Note that you have to specify the packages your script depends on, including *azureml-mlflow* that implicitly enables the MLflow logging to Azure ML. \n",
|
||||
"\n",
|
||||
"First, create a environment with Docker enable and required package dependencies specified."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"mlflow"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Environment\n",
|
||||
"from azureml.core.conda_dependencies import CondaDependencies\n",
|
||||
"\n",
|
||||
"env = Environment(name=\"mlflow-env\")\n",
|
||||
"\n",
|
||||
"env.docker.enabled = True\n",
|
||||
"\n",
|
||||
"# Specify conda dependencies with scikit-learn and temporary pointers to mlflow extensions\n",
|
||||
"cd = CondaDependencies.create(\n",
|
||||
" conda_packages=[\"scikit-learn\", \"matplotlib\"],\n",
|
||||
" pip_packages=[\"azureml-mlflow\", \"numpy\"]\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
"env.python.conda_dependencies = cd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Next, specify a script run configuration that includes the training script, environment and CPU cluster created earlier."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import ScriptRunConfig\n",
|
||||
"\n",
|
||||
"src = ScriptRunConfig(source_directory=\".\", script=training_script)\n",
|
||||
"src.run_config.environment = env\n",
|
||||
"src.run_config.target = cpu_cluster.name"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, submit the run. Note that the first instance of the run typically takes longer as the Docker-based environment is created, several minutes. Subsequent runs reuse the image and are faster."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run = exp.submit(src)\n",
|
||||
"run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can navigate to your Azure ML Workspace at Azure Portal to view the run metrics and artifacts. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run.id"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You can also get the metrics and bring them to your local notebook, and view the details of the run."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"run.get_metrics()"
|
||||
]
|
||||
},
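The plot logged by the training script can also be pulled down locally from run history; a sketch, assuming the artifact kept the `actuals_vs_predictions.png` name it was logged under:

```python
# List the files stored with the run, then download the saved plot.
print(run.get_file_names())
run.download_file(name='actuals_vs_predictions.png',
                  output_file_path='actuals_vs_predictions.png')
```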
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ws.get_details()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Next steps\n",
|
||||
"\n",
|
||||
" * [Deploy the model as a web service](../deploy-model/deploy-model.ipynb)\n",
|
||||
" * [Learn more about Azure Machine Learning compute options](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "rastala"
|
||||
}
|
||||
],
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
"name": "python36"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.4"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -0,0 +1,46 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license.

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
import mlflow
import mlflow.sklearn

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

with mlflow.start_run():
    X, y = load_diabetes(return_X_y=True)
    # Column names of the diabetes dataset, kept for reference
    columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    data = {
        "train": {"X": X_train, "y": y_train},
        "test": {"X": X_test, "y": y_test}}

    mlflow.log_metric("Training samples", len(data['train']['X']))
    mlflow.log_metric("Test samples", len(data['test']['X']))

    # Log the algorithm parameter alpha to the run
    mlflow.log_param('alpha', 0.03)
    # Create, fit, and test the scikit-learn Ridge regression model
    regression_model = Ridge(alpha=0.03)
    regression_model.fit(data['train']['X'], data['train']['y'])
    preds = regression_model.predict(data['test']['X'])

    # Log the mean squared error
    print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds))
    mlflow.log_metric('mse', mean_squared_error(data['test']['y'], preds))

    # Save the model to the outputs directory for capture
    mlflow.sklearn.log_model(regression_model, "model")

    # Plot actuals vs predictions and save the plot within the run
    fig = plt.figure(1)
    idx = np.argsort(data['test']['y'])
    plt.plot(data['test']['y'][idx], preds[idx])
    fig.savefig("actuals_vs_predictions.png")
    mlflow.log_artifact("actuals_vs_predictions.png")
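Once a run of this script completes, the logged model can be pulled back for local scoring. A minimal sketch, assuming MLflow is pointed at the same tracking store the run logged to; `<run-id>` is an illustrative placeholder:

```python
# A minimal sketch, assuming MLflow points at the tracking store that the
# run above logged to; "<run-id>" is an illustrative placeholder.
import mlflow.sklearn
from sklearn.datasets import load_diabetes

run_id = "<run-id>"  # replace with the id of a completed run
model = mlflow.sklearn.load_model("runs:/{}/model".format(run_id))

X, _ = load_diabetes(return_X_y=True)
print(model.predict(X[:5]))  # score a few rows locally
```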
9
how-to-use-azureml/work-with-data/README.md
Normal file
@@ -0,0 +1,9 @@
# Work With Data Using Azure Machine Learning Service

Azure Machine Learning Datasets (preview) make it easier to access and work with your data. Datasets manage data in various scenarios such as model training and pipeline creation. Using the Azure Machine Learning SDK, you can access underlying storage, explore and prepare data, manage the life cycle of different Dataset definitions, and compare the Datasets used in training and in production.

- For an example of using Datasets, see the [sample](datasets).
- For advanced data preparation examples, see [dataprep](dataprep).
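A minimal sketch of the preview Dataset workflow described above, assuming a workspace `config.json` and a local CSV file; the file path and registered name are illustrative:

```python
# A minimal sketch of the Dataset (preview) workflow; assumes a workspace
# config.json and a local ./data/crime.csv file (both illustrative).
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()

# Explore data read from the underlying storage.
dataset = Dataset.from_delimited_files('./data/crime.csv')
print(dataset.head(5))

# Register the Dataset in the workspace to manage its life cycle.
dataset = dataset.register(workspace=ws, name='crime_data', exist_ok=True)
```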
@@ -31,6 +31,35 @@ If you have any questions or feedback, send us an email at: [askamldataprep@micr

## Release Notes
### 2019-05-28 (version 1.1.4)

New features
- You can now use the following expression language functions to extract and parse datetime values into new columns:
  - `RegEx.extract_record()` extracts datetime elements into a new column.
  - `create_datetime()` creates datetime objects from separate datetime elements.
- When calling `get_profile()`, quantile columns are now labeled as (est.) to clearly indicate that the values are approximations.
- You can now use `**` globbing when reading from Azure Blob Storage.
  - e.g. `dprep.read_csv(path='https://yourblob.blob.core.windows.net/yourcontainer/**/data/*.csv')`

Bug fixes
- Fixed a bug related to reading a Parquet file from a remote source (Azure Blob).
### 2019-05-08 (version 1.1.3)

New features
- Added support to read from a PostgreSQL database, either by calling `read_postgresql` or using a Datastore.
  - See examples in how-to guides:
    - [Data Ingestion notebook](https://aka.ms/aml-data-prep-ingestion-nb)
    - [Datastore notebook](https://aka.ms/aml-data-prep-datastore-nb)

Bug fixes and improvements
- Fixed issues with column type conversion:
  - Now correctly converts a boolean or numeric column to a boolean column.
  - No longer fails when attempting to set a date column to date type.
- Improved JoinType types and accompanying reference documentation. When joining two dataflows, you can now specify one of these types of join:
  - NONE, MATCH, INNER, UNMATCHLEFT, LEFTANTI, LEFTOUTER, UNMATCHRIGHT, RIGHTANTI, RIGHTOUTER, FULLANTI, FULL.
- Improved data type inference to recognize more date formats.
### 2019-04-17 (version 1.1.2)

Note: The Data Prep Python SDK will no longer install the `numpy` and `pandas` packages. See the [updated installation instructions](https://aka.ms/aml-data-prep-installation).
@@ -222,3 +251,5 @@ Bug fixes

IMPORTANT: Please read the notice and find out more about this NYC Taxi and Limousine Commission dataset here: http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml

IMPORTANT: Please read the notice and find out more about this Chicago Police Department dataset here: https://catalog.data.gov/dataset/crimes-2001-to-present-398a4
@@ -477,6 +477,13 @@
"dflow_path = path.join(mkdtemp(), \"new_york_taxi.dprep\")\n",
"combined_df.save(file_path=dflow_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
}
],
"metadata": {
@@ -94,9 +94,16 @@
"metadata": {},
"outputs": [],
"source": [
-"spark_df = df.take(5).to_pandas_dataframe()\n",
+"spark_df = df.to_spark_dataframe()\n",
"spark_df.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
}
],
"metadata": {
1001
how-to-use-azureml/work-with-data/dataprep/data/crime-full.csv
Normal file
File diff suppressed because it is too large
4415
how-to-use-azureml/work-with-data/dataprep/data/large_dflow.json
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,11 @@
Stream Path
https://dataset.blob.core.windows.net/blobstore/container/2019/01/01/train.csv
https://dataset.blob.core.windows.net/blobstore/container/2019/01/02/train.csv
https://dataset.blob.core.windows.net/blobstore/container/2019/01/03/train.csv
https://dataset.blob.core.windows.net/blobstore/container/2019/01/04/train.csv
https://dataset.blob.core.windows.net/blobstore/container/2019/01/05/train.csv
https://dataset.blob.core.windows.net/blobstore/container/2019/01/06/train.csv
https://dataset.blob.core.windows.net/blobstore/container/2019/01/07/train.csv
https://dataset.blob.core.windows.net/blobstore/container/2019/01/08/train.csv
https://dataset.blob.core.windows.net/blobstore/container/2019/01/09/train.csv
https://dataset.blob.core.windows.net/blobstore/container/2019/01/10/train.csv
@@ -139,6 +139,51 @@
"dflow_to_lower.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### `RegEx.extract_record()`\n",
"Using the `RegEx.extract_record()` expression, add a new record column \"Stream Date Record\", which contains the named capturing groups from the regex together with their matched values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dflow_regex_extract_record = dprep.auto_read_file('../data/stream-path.csv')\n",
"regex = dprep.RegEx('\\/(?<year>\\d{4})\\/(?<month>\\d{2})\\/(?<day>\\d{2})\\/')\n",
"dflow_regex_extract_record = dflow_regex_extract_record.add_column(new_column_name='Stream Date Record',\n",
"                                                                   prior_column='Stream Path',\n",
"                                                                   expression=regex.extract_record(dflow_regex_extract_record['Stream Path']))\n",
"dflow_regex_extract_record.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### `create_datetime()`\n",
"Using the `create_datetime()` expression, add a new column \"Stream Date\", which contains datetime values constructed from the year, month, and day values extracted into the record column \"Stream Date Record\"."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"year = dprep.col('year', dflow_regex_extract_record['Stream Date Record'])\n",
"month = dprep.col('month', dflow_regex_extract_record['Stream Date Record'])\n",
"day = dprep.col('day', dflow_regex_extract_record['Stream Date Record'])\n",
"dflow_create_datetime = dflow_regex_extract_record.add_column(new_column_name='Stream Date',\n",
"                                                              prior_column='Stream Date Record',\n",
"                                                              expression=dprep.create_datetime(year, month, day))\n",
"dflow_create_datetime.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -39,6 +39,7 @@
"[Read Part Files Using Globbing](#globbing)<br>\n",
"[Read JSON](#json)<br>\n",
"[Read SQL](#sql)<br>\n",
"[Read PostgreSQL](#postgresql)<br>\n",
"[Read From Azure Blob](#azure-blob)<br>\n",
"[Read From ADLS](#adls)<br>\n",
"[Read Pandas DataFrame](#pandas-df)<br>"
@@ -110,7 +111,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"When reading delimited files, the only required parameter is `path`. Other parameters (e.g. separator, encoding, whether to use headers, etc.) are available to modify default behavior.In this case, you can read a file by specifying only its location, then retrieve the first 5 rows to evaluate the result."
+"When reading delimited files, the only required parameter is `path`. Other parameters (e.g. separator, encoding, whether to use headers, etc.) are available to modify default behavior.\n",
+"In this case, you can read a file by specifying only its location, then retrieve the first 5 rows to evaluate the result."
]
},
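For context, a minimal sketch of such a path-only call, assuming the `azureml-dataprep` package is installed; the file path is illustrative:

```python
# A minimal sketch of reading a delimited file with only the required
# `path` parameter; './data/crime.csv' is an illustrative path.
import azureml.dataprep as dprep

dflow = dprep.read_csv(path='./data/crime.csv')
dflow.head(5)
```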
{
@@ -632,6 +634,69 @@
"df.dtypes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"postgresql\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Read PostgreSQL"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Data Prep can also fetch data from Azure PostgreSQL servers.\n",
"\n",
"To read data from a PostgreSQL server, first create a data source object that contains the connection information."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"secret = dprep.register_secret(value=\"dpr3pTestU$er\", id=\"dprepPostgresqlUser\")\n",
"ds = dprep.PostgreSQLDataSource(server_name=\"dprep-postgresql-test.postgres.database.azure.com\",\n",
"                                database_name=\"dprep-postgresql-testdb\",\n",
"                                user_name=\"dprepPostgresqlReadOnlyUser@dprep-postgresql-test\",\n",
"                                password=secret)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the password parameter of `PostgreSQLDataSource` accepts a Secret object as well.\n",
"Now that you have created a PostgreSQL data source object, you can proceed to read data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dflow = dprep.read_postgresql(ds, \"SELECT * FROM public.people\")\n",
"dflow.head(5)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dflow.dtypes"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -899,7 +964,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.6.8"
+"version": "3.6.4"
}
},
"nbformat": 4,
@@ -142,33 +142,6 @@
"source": [
"profile.columns['X Coordinate'].type_counts"
]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"#TEST CELL: Profile.Compare\n",
-"import azureml.dataprep as dprep\n",
-"import math\n",
-"\n",
-"lhs_dflow = dprep.auto_read_file('../data/crime-spring.csv')\n",
-"lhs_profile = lhs_dflow.get_profile(number_of_histogram_bins=100)\n",
-"rhs_dflow = dprep.auto_read_file('../data/crime-winter.csv')\n",
-"rhs_profile = rhs_dflow.get_profile(number_of_histogram_bins=100)\n",
-"\n",
-"diff = lhs_profile.compare(rhs_profile)\n",
-"\n",
-"expected_col1 = dprep.ColumnProfileDifference()\n",
-"expected_col1.difference_in_count_in_percent = 0\n",
-"expected_col1.difference_in_histograms = 135349.66146244822\n",
-"\n",
-"for actual, expected in zip(diff.column_profile_difference, [expected_col1]):\n",
-"    assert math.isclose(actual.difference_in_count_in_percent, expected.difference_in_count_in_percent)\n",
-"    assert math.isclose(actual.difference_in_histograms, expected.difference_in_histograms)\n",
-"    break\n"
-]
}
],
"metadata": {
@@ -117,6 +117,24 @@
"dflow_sql.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also read from a PostgreSQL database. To do that, first retrieve a PostgreSQL datastore instance and pass it to Data Prep for reading."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"datastore = Datastore(workspace=workspace, name='postgre_test')\n",
"dflow_sql = dprep.read_postgresql(data_source=datastore, query='SELECT * FROM public.people')\n",
"dflow_sql.head(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
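A minimal sketch of registering such a datastore beforehand; it relies on the `register_azure_postgre_sql` helper in azureml-core (treat the exact signature as an assumption), and all server and credential values are illustrative placeholders:

```python
# A minimal sketch; the register_azure_postgre_sql helper and its exact
# signature are an assumption, and all values below are placeholders.
from azureml.core import Workspace
from azureml.core.datastore import Datastore

workspace = Workspace.from_config()
datastore = Datastore.register_azure_postgre_sql(workspace=workspace,
                                                 datastore_name='postgre_test',
                                                 server_name='<server-name>',
                                                 database_name='<database-name>',
                                                 user_id='<user-id>',
                                                 user_password='<password>')
```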
@@ -404,6 +404,13 @@
"* [Sample your data](../../how-to-guides/subsetting-sampling.ipynb)\n",
"* [Reference and link between Dataflows](../../how-to-guides/join.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
}
],
"metadata": {
@@ -152,4 +152,8 @@ For an end-to-end tutorial, you may refer to [Dataset tutorial](datasets-tutoria
- Register the Dataset in your workspace for easy access in training.
- Take snapshots of data to ensure models can be trained with the same data every time.
- Use the registered Dataset in your training script.
- Create and use multiple Dataset definitions to ensure that updates to the definition don't break existing pipelines/scripts.
@@ -15,22 +15,25 @@ from sklearn.tree import DecisionTreeClassifier
run = Run.get_context()
workspace = run.experiment.workspace

-dataset_name = 'training_data'
+dataset_name = 'clean_Titanic_tutorial'
+
+snapshot_name = 'train_snapshot'

dataset = Dataset.get(workspace=workspace, name=dataset_name)
-dflow = dataset.get_definition()
-dflow_val, dflow_train = dflow.random_split(percentage=0.3)
+df = dataset.get_snapshot(snapshot_name=snapshot_name).to_pandas_dataframe()

-y_df = dflow_train.keep_columns(['HasDetections']).to_pandas_dataframe()
-x_df = dflow_train.drop_columns(['HasDetections']).to_pandas_dataframe()
-y_val = dflow_val.keep_columns(['HasDetections']).to_pandas_dataframe()
-x_val = dflow_val.drop_columns(['HasDetections']).to_pandas_dataframe()
+x_col = ['Pclass', 'Sex', 'SibSp', 'Parch']
+y_col = ['Survived']
+x_df = df.loc[:, x_col]
+y_df = df.loc[:, y_col]

-data = {"train": {"X": x_df, "y": y_df},
-        "validation": {"X": x_val, "y": y_val}}
+x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
+
+data = {"train": {"X": x_train, "y": y_train},
+        "test": {"X": x_test, "y": y_test}}

clf = DecisionTreeClassifier().fit(data["train"]["X"], data["train"]["y"])

-print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_df, y_df)))
-print('Accuracy of Decision Tree classifier on validation set: {:.2f}'.format(clf.score(x_val, y_val)))
+print('Accuracy of Decision Tree classifier on training set: {:.2f}'.format(clf.score(x_train, y_train)))
+print('Accuracy of Decision Tree classifier on test set: {:.2f}'.format(clf.score(x_test, y_test)))
52
index.html
@@ -1,52 +0,0 @@
<!DOCTYPE html>
<html>
<head>

<meta name="google-site-verification" content="fkZxAt5AEHiB_Wom2R_25VTmNyj19J8lZlfTREsaEN4" />

<title>Azure Machine Learning</title>

</head>
<body>
<h1 id="azure-machine-learning-service-example-notebooks">Azure Machine Learning service example notebooks</h1>
<p>This repository contains example notebooks demonstrating the <a href="https://azure.microsoft.com/en-us/services/machine-learning-service/">Azure Machine Learning</a> Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.</p>
<div class="figure">

<img src="https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/service/media/overview-what-is-azure-ml/aml.png" alt="Azure ML workflow" /><p class="caption">Azure ML workflow</p>
</div>
<h2 id="quick-installation">Quick installation</h2>
<pre class="sh"><code>pip install azureml-sdk</code></pre>
<p>Read more detailed instructions on <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/NBSETUP.md">how to set up your environment</a> using Azure Notebook service, your own Jupyter notebook server, or Docker.</p>
<h2 id="how-to-navigate-and-use-the-example-notebooks">How to navigate and use the example notebooks?</h2>
<p>You should always run the <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb">Configuration</a> notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.</p>
<p>If you want to...</p>
<ul>
<li>...try out and explore Azure ML, start with image classification tutorials: <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/img-classification-part1-training.ipynb">Part 1 (Training)</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/img-classification-part2-deploy.ipynb">Part 2 (Deployment)</a>.</li>
<li>...prepare your data and do automated machine learning, start with regression tutorials: <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/regression-part1-data-prep.ipynb">Part 1 (Data Prep)</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials/regression-part2-automated-ml.ipynb">Part 2 (Automated ML)</a>.</li>
<li>...learn about experimentation and tracking run history, first <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb">train within Notebook</a>, then try <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb">training on remote VM</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/logging-api/logging-api.ipynb">using logging APIs</a>.</li>
<li>...train deep learning models at scale, first learn about <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb">Machine Learning Compute</a>, and then try <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb">distributed hyperparameter tuning</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/distributed-pytorch-with-horovod/distributed-pytorch-with-horovod.ipynb">distributed training</a>.</li>
<li>...deploy models as a realtime scoring service, first learn the basics by <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb">training within Notebook and deploying to Azure Container Instance</a>, then learn how to <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb">register and manage models, and create Docker images</a>, and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb">production deploy models on Azure Kubernetes Cluster</a>.</li>
<li>...deploy models as a batch scoring service, first <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb">train a model within Notebook</a>, learn how to <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb">register and manage models</a>, then <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb">create Machine Learning Compute for scoring compute</a>, and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/pipeline-mpi-batch-prediction.ipynb">use Machine Learning Pipelines to deploy your model</a>.</li>
<li>...monitor your deployed models, learn about using <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb">App Insights</a> and <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/enable-data-collection-for-models-in-aks/enable-data-collection-for-models-in-aks.ipynb">model data collection</a>.</li>
</ul>
<h2 id="tutorials">Tutorials</h2>
<p>The <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/tutorials">Tutorials</a> folder contains notebooks for the tutorials described in the <a href="https://aka.ms/aml-docs">Azure Machine Learning documentation</a>.</p>
<h2 id="how-to-use-azure-ml">How to use Azure ML</h2>
<p>The <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml">How to use Azure ML</a> folder contains specific examples demonstrating the features of the Azure Machine Learning SDK</p>
<ul>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training">Training</a> - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning">Training with Deep Learning</a> - Examples demonstrating how to build deep learning models using estimators and parameter sweeps</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/manage-azureml-service">Manage Azure ML Service</a> - Examples how to perform tasks, such as authenticate against Azure ML service in different ways.</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning">Automated Machine Learning</a> - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines">Machine Learning Pipelines</a> - Examples showing how to create and use reusable pipelines for training and batch scoring</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment">Deployment</a> - Examples showing how to deploy and manage machine learning models and solutions</li>
<li><a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-databricks">Azure Databricks</a> - Examples showing how to use Azure ML with Azure Databricks</li>
</ul>
<h2 id="projects-using-azure-machine-learning">Projects using Azure Machine Learning</h2>
<p>Visit following repos to see projects contributed by Azure ML users:</p>
<ul>
<li><a href="https://github.com/Microsoft/AzureML-BERT">Fine tune natural language processing models using Azure Machine Learning service</a></li>
<li><a href="https://github.com/amynic/azureml-sdk-fashion">Fashion MNIST with Azure ML SDK</a></li>
</ul>
</body>
</html>
52
pr.md
@@ -1,52 +0,0 @@
# Azure Machine Learning Resources & Links
## Product Documentation
- [Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/)
- [Azure Machine Learning Studio](https://docs.microsoft.com/en-us/azure/machine-learning/studio/)

## Product Team Blogs
- [What’s new in Azure Machine Learning service](https://aka.ms/aml-blog-whats-new)
- [Announcing automated ML capability in Azure Machine Learning](https://aka.ms/aml-blog-automl)
- [Experimentation using Azure Machine Learning](https://aka.ms/aml-blog-experimentation)
- [Azure AI – Making AI real for business](https://aka.ms/aml-blog-overview)

## Community Blogs
- [Power Bat – How Spektacom is Powering the Game of Cricket with Microsoft AI](https://blogs.technet.microsoft.com/machinelearning/2018/10/11/power-bat-how-spektacom-is-powering-the-game-of-cricket-with-microsoft-ai/)

## Ignite 2018 Public Preview Launch Sessions
- [AI with Azure Machine Learning services: Simplifying the data science process](https://myignite.techcommunity.microsoft.com/sessions/66248)
- [AI TechTalk: Azure Machine Learning SDK - a walkthrough](https://myignite.techcommunity.microsoft.com/sessions/66265)
- [AI for an intelligent cloud and intelligent edge: Discover, deploy, and manage with Azure ML services](https://myignite.techcommunity.microsoft.com/sessions/65389)
- [Generating high quality models efficiently using Automated ML and Hyperparameter Tuning](https://myignite.techcommunity.microsoft.com/sessions/66245)
- [AI for pros: Deep learning with PyTorch using the Azure Data Science Virtual Machine and scaling training with Azure ML](https://myignite.techcommunity.microsoft.com/sessions/66244)

## Get-started Videos on YouTube
- [Get started with Python SDK](https://youtu.be/VIsXeTuW3FU)
- [Get started from Azure Portal](https://youtu.be/lCkYUHV86Mk)

## Third Party Articles
- [Azure’s new machine learning features embrace Python](https://www.infoworld.com/article/3306840/azure/azures-new-machine-learning-features-embrace-python.html) (InfoWorld)
- [How to use Azure ML in Windows 10](https://www.infoworld.com/article/3308381/azure/how-to-use-azure-ml-in-windows-10.html) (InfoWorld)
- [How Azure ML Streamlines Cloud-based Machine Learning](https://thenewstack.io/how-the-azure-ml-streamlines-cloud-based-machine-learning/) (The New Stack)
- [Facebook launches PyTorch 1.0 with integrations for Google Cloud, AWS, and Azure Machine Learning](https://venturebeat.com/2018/10/02/facebook-launches-pytorch-1-0-integrations-for-google-cloud-aws-and-azure-machine-learning/) (VentureBeat)
- [How Microsoft Uses Machine Learning to Help You Build Machine Learning Pipelines](https://towardsdatascience.com/how-microsoft-uses-machine-learning-to-help-you-build-machine-learning-pipelines-be75f710613b) (Towards Data Science)
- [Microsoft's Machine Learning Tools for Developers Get Smarter](https://techcrunch.com/2018/09/24/microsofts-machine-learning-tools-for-developers-get-smarter/) (TechCrunch)
- [Microsoft introduces Azure service to automatically build AI models](https://venturebeat.com/2018/09/24/microsoft-introduces-azure-service-to-automatically-build-ai-models/) (VentureBeat)

## Community Projects
- [Use Papermill with Azure ML](https://github.com/jreynolds01/papermill_execution_azureml/)
- [Fashion MNIST](https://github.com/amynic/azureml-sdk-fashion)
- Keras on Databricks
- [Samples from CSS](https://github.com/Azure/AMLSamples)

## Azure Machine Learning Studio Resources
- [A-Z Machine Learning using Azure Machine Learning (AzureML)](https://www.udemy.com/machine-learning-using-azureml/)
- [Machine Learning In The Cloud With Azure Machine Learning](https://www.udemy.com/machine-learning-in-the-cloud-with-azure-machine-learning/)
- [How to Become A Data Scientist Using Azure Machine Learning](https://www.udemy.com/azure-machine-learning-introduction/)
- [Learn Azure Machine Learning from scratch](https://www.udemy.com/learn-azure-machine-learning-from-scratch/)
- [Azure Machine Learning Studio PowerShell Module](https://aka.ms/amlps)

## Forum Help
- [Azure Machine Learning service](https://social.msdn.microsoft.com/Forums/en-US/home?forum=AzureMachineLearningService)
- [Azure Machine Learning Studio](https://social.msdn.microsoft.com/forums/azure/en-US/home?forum=MachineLearning)
@@ -18,3 +18,5 @@ If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwi
* [Part 2](regression-part2-automated-ml.ipynb): Train a model using Automated Machine Learning.

Also find quickstarts and how-tos on the [official documentation site for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/).
@@ -536,7 +536,7 @@
"outputs": [],
"source": [
"# specify show_output to True for a verbose log\n",
-"run.wait_for_completion(show_output=False) "
+"run.wait_for_completion(show_output=True) "
]
},
{
@@ -626,6 +626,13 @@
"\n",
"> [Tutorial 2 - Deploy models](img-classification-part2-deploy.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
}
],
"metadata": {
@@ -587,6 +587,13 @@
" \n",
"You can also try out the [regression tutorial](regression-part1-data-prep.ipynb)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
}
],
"metadata": {
@@ -599,6 +599,13 @@
"\n",
"> [Tutorial #2: Train regression model](regression-part2-automated-ml.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
}
],
"metadata": {
@@ -25,7 +25,7 @@
"> * Run the model locally with custom parameters\n",
"> * Explore the results\n",
"\n",
-"If you don't have an Azure subscription, create a [free account](https://aka.ms/AMLfree) before you begin. \n",
+"If you do not have an Azure subscription, create a [free account](https://aka.ms/AMLfree) before you begin. \n",
"\n",
"> Code in this article was tested with Azure Machine Learning SDK version 1.0.0\n",
"\n",
@@ -485,7 +485,7 @@
">The resources you created can be used as prerequisites to other Azure Machine Learning service tutorials and how-to articles. \n",
"\n",
"\n",
-"If you don't plan to use the resources you created, delete them, so you don't incur any charges:\n",
+"If you do not plan to use the resources you created, delete them, so you do not incur any charges:\n",
"\n",
"1. In the Azure portal, select **Resource groups** on the far left.\n",
"\n",
@@ -510,6 +510,13 @@
"\n",
"[Deploy your model](https://docs.microsoft.com/azure/machine-learning/service/tutorial-deploy-models-with-aml) with Azure Machine Learning."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
}
],
"metadata": {