Mirror of https://github.com/Azure/MachineLearningNotebooks.git (synced 2025-12-20 09:37:04 -05:00)

Compare commits: update-spa...release_up (1 commit)

| Author | SHA1 | Date |
|---|---|---|
| | 6948535d8c | |

@@ -13,7 +13,7 @@ Read more detailed instructions on [how to set up your environment](./NBSETUP.md
 ## How to navigate and use the example notebooks?
 If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, you should always run the [Configuration](./configuration.ipynb) notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.
-This [index](.index.md) should assist in navigating the Azure Machine Learning notebook samples and encourage efficient retrieval of topics and content.
+This [index](./index.md) should assist in navigating the Azure Machine Learning notebook samples and encourage efficient retrieval of topics and content.

 If you want to...

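For readers of this README change: the Configuration notebook it points to ends by caching workspace details in a local `config.json`, which the other samples then load. A minimal sketch, assuming that `config.json` has already been written by the Configuration notebook (or downloaded from the Azure portal):

```python
# Hedged sketch: connect to an existing workspace from a cached config.json.
from azureml.core import Workspace

ws = Workspace.from_config()  # searches the current and parent directories for config.json
print(ws.name, ws.resource_group, ws.location, sep="\n")
```
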
@@ -103,7 +103,7 @@
 "source": [
 "import azureml.core\n",
 "\n",
-"print(\"This notebook was created using version 1.1.1rc0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.1.2rc0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

@@ -28,7 +28,6 @@ dependencies:
 - azureml-contrib-interpret
 - pytorch-transformers==1.0.0
 - spacy==2.1.8
-- joblib
 - onnxruntime==1.0.0
 - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz

@@ -29,7 +29,6 @@ dependencies:
 - azureml-contrib-interpret
 - pytorch-transformers==1.0.0
 - spacy==2.1.8
-- joblib
 - onnxruntime==1.0.0
 - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz

@@ -320,7 +320,6 @@
 "|**n_cross_validations**|Number of cross validation splits.|\n",
 "|**training_data**|Input dataset, containing both features and label column.|\n",
 "|**label_column_name**|The name of the label column.|\n",
-"|**model_explainability**|Indicate to explain each trained pipeline or not.|\n",
 "\n",
 "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
 ]

@@ -352,7 +351,6 @@
 " training_data = train_data,\n",
 " label_column_name = label,\n",
 " validation_data = validation_dataset,\n",
-" model_explainability=True,\n",
 " **automl_settings\n",
 " )"
 ]

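Read as plain Python rather than notebook JSON, the call after this change amounts roughly to the sketch below; `train_data`, `label`, `validation_dataset`, and `automl_settings` are assumed to be defined earlier in the notebook and are not part of this diff.

```python
# Hedged sketch of the AutoMLConfig call after the change; model_explainability
# is no longer passed explicitly.
from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(training_data=train_data,
                             label_column_name=label,
                             validation_data=validation_dataset,
                             **automl_settings)
```
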
@@ -500,11 +498,11 @@
 "outputs": [],
 "source": [
 "# Wait for the best model explanation run to complete\n",
-"from azureml.train.automl.run import AutoMLRun\n",
+"from azureml.core.run import Run\n",
 "model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')\n",
 "print(model_explainability_run_id)\n",
 "if model_explainability_run_id is not None:\n",
-" model_explainability_run = AutoMLRun(experiment=experiment, run_id=model_explainability_run_id)\n",
+" model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)\n",
 " model_explainability_run.wait_for_completion()\n",
 "\n",
 "# Get the best run object\n",

@@ -343,7 +343,7 @@
 "outputs": [],
 "source": [
 "from azureml.train.automl import AutoMLConfig\n",
-"from azureml.train.automl import AutoMLStep\n",
+"from azureml.pipeline.steps import AutoMLStep\n",
 "\n",
 "automl_settings = {\n",
 " \"iteration_timeout_minutes\": 10,\n",

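For orientation, the relocated `AutoMLStep` class is typically wired into a pipeline roughly as sketched below; the step name and the `automl_config` object are assumptions based on the surrounding notebook, not part of this diff.

```python
# Hedged sketch: constructing a pipeline step with the relocated import.
# `automl_config` is assumed to be an AutoMLConfig built as in the notebook.
from azureml.pipeline.steps import AutoMLStep

automl_step = AutoMLStep(name='automl_module',
                         automl_config=automl_config,
                         allow_reuse=True)
```
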
@@ -459,8 +459,8 @@
 "# use forecast_quantiles function, not the forecast() one\n",
 "y_pred_quantiles = fitted_model.forecast_quantiles(X_test)\n",
 "\n",
-"# it all nicely aligns column-wise\n",
-"pd.concat([X_test.reset_index(), y_pred_quantiles], axis=1)"
+"# quantile forecasts returned in a Dataframe along with the time and grain columns \n",
+"y_pred_quantiles"
 ]
 },
 {

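The point of this change is that `forecast_quantiles` now returns a DataFrame that already carries the time and grain columns, so no manual alignment with `X_test` is needed. A minimal sketch of inspecting the result; the column names are illustrative and depend on your dataset and configured quantiles:

```python
# Hedged sketch: the quantile forecast already includes time/grain columns.
y_pred_quantiles = fitted_model.forecast_quantiles(X_test)
print(y_pred_quantiles.columns.tolist())  # e.g. time column, grain columns, 0.05, 0.5, 0.95
print(y_pred_quantiles.head())
```
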
@@ -514,7 +514,7 @@
 " content = cefr.read()\n",
 "\n",
 "# Replace the values in train_explainer.py file with the appropriate values\n",
-"content = content.replace('<<experimnet_name>>', automl_run.experiment.name) # your experiment name.\n",
+"content = content.replace('<<experiment_name>>', automl_run.experiment.name) # your experiment name.\n",
 "content = content.replace('<<run_id>>', automl_run.id) # Run-id of the AutoML run for which you want to explain the model.\n",
 "content = content.replace('<<target_column_name>>', 'ERP') # Your target column name\n",
 "content = content.replace('<<task>>', 'regression') # Training task type\n",

@@ -22,7 +22,7 @@ run = Run.get_context()
 ws = run.experiment.workspace

 # Get the AutoML run object from the experiment name and the workspace
-experiment = Experiment(ws, '<<experimnet_name>>')
+experiment = Experiment(ws, '<<experiment_name>>')
 automl_run = Run(experiment=experiment, run_id='<<run_id>>')

 # Check if this AutoML model is explainable

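Both hunks above fix the same `<<experimnet_name>>` placeholder on the two sides of a simple string-templating pattern: the notebook rewrites `train_explainer.py` before submitting it. A hedged sketch of that pattern, using the file name and placeholders shown in the diff (`automl_run` is assumed to exist as in the notebook):

```python
# Sketch of the placeholder-replacement pattern used to parameterize train_explainer.py.
with open('train_explainer.py', 'r') as f:
    content = f.read()

content = content.replace('<<experiment_name>>', automl_run.experiment.name)
content = content.replace('<<run_id>>', automl_run.id)

with open('train_explainer.py', 'w') as f:
    f.write(content)
```
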
@@ -0,0 +1,314 @@
+{
+"cells": [
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Copyright (c) Microsoft Corporation. All rights reserved.\n",
+"\n",
+"Licensed under the MIT License."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+""
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Deploying a web service to Azure Kubernetes Service (AKS)\n",
+"This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (one time action), and deploying a service to it. \n",
+"We then test and delete the service, image and model."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"import azureml.core\n",
+"print(azureml.core.VERSION)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Get workspace\n",
+"Load existing workspace from the config file info."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azureml.core.workspace import Workspace\n",
+"\n",
+"ws = Workspace.from_config()\n",
+"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Register the model\n",
+"Register an existing trained model, add description and tags. Prior to registering the model, you should have a TensorFlow [Saved Model](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md) in the `resnet50` directory. You can download a [pretrained resnet50](http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz) and unpack it to that directory."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"#Register the model\n",
+"from azureml.core.model import Model\n",
+"model = Model.register(model_path = \"resnet50\", # this points to a local file\n",
+" model_name = \"resnet50\", # this is the name the model is registered as\n",
+" tags = {'area': \"Image classification\", 'type': \"classification\"},\n",
+" description = \"Image classification trained on Imagenet Dataset\",\n",
+" workspace = ws)\n",
+"\n",
+"print(model.name, model.description, model.version)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Provision the AKS Cluster\n",
+"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azureml.core.compute import ComputeTarget, AksCompute\n",
+"from azureml.core.compute_target import ComputeTargetException\n",
+"\n",
+"# Choose a name for your GPU cluster\n",
+"gpu_cluster_name = \"aks-gpu-cluster\"\n",
+"\n",
+"# Verify that cluster does not exist already\n",
+"try:\n",
+" gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)\n",
+" print(\"Found existing gpu cluster\")\n",
+"except ComputeTargetException:\n",
+" print(\"Creating new gpu-cluster\")\n",
+" \n",
+" # Specify the configuration for the new cluster\n",
+" compute_config = AksCompute.provisioning_configuration(cluster_purpose=AksCompute.ClusterPurpose.DEV_TEST,\n",
+" agent_count=1,\n",
+" vm_size=\"Standard_NV6\")\n",
+" # Create the cluster with the specified name and configuration\n",
+" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)\n",
+"\n",
+" # Wait for the cluster to complete, show the output log\n",
+" gpu_cluster.wait_for_completion(show_output=True)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Deploy the model as a web service to AKS\n",
+"\n",
+"First create a scoring script"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"%%writefile score.py\n",
+"import tensorflow as tf\n",
+"import numpy as np\n",
+"import json\n",
+"import os\n",
+"from azureml.contrib.services.aml_request import AMLRequest, rawhttp\n",
+"from azureml.contrib.services.aml_response import AMLResponse\n",
+"\n",
+"def init():\n",
+" global session\n",
+" global input_name\n",
+" global output_name\n",
+" \n",
+" session = tf.Session()\n",
+"\n",
+" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
+" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
+" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
+" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'resnet50')\n",
+" model = tf.saved_model.loader.load(session, ['serve'], model_path)\n",
+" if len(model.signature_def['serving_default'].inputs) > 1:\n",
+" raise ValueError(\"This score.py only supports one input\")\n",
+" input_name = [tensor.name for tensor in model.signature_def['serving_default'].inputs.values()][0]\n",
+" output_name = [tensor.name for tensor in model.signature_def['serving_default'].outputs.values()]\n",
+" \n",
+"\n",
+"@rawhttp\n",
+"def run(request):\n",
+" if request.method == 'POST':\n",
+" reqBody = request.get_data(False)\n",
+" resp = score(reqBody)\n",
+" return AMLResponse(resp, 200)\n",
+" if request.method == 'GET':\n",
+" respBody = str.encode(\"GET is not supported\")\n",
+" return AMLResponse(respBody, 405)\n",
+" return AMLResponse(\"bad request\", 500)\n",
+"\n",
+"def score(data):\n",
+" result = session.run(output_name, {input_name: [data]})\n",
+" return json.dumps(result[1].tolist())\n",
+"\n",
+"if __name__ == \"__main__\":\n",
+" init()\n",
+" with open(\"test_image.jpg\", 'rb') as f:\n",
+" content = f.read()\n",
+" print(score(content))"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Now create the deployment configuration objects and deploy the model as a webservice."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"# Set the web service configuration (using default here)\n",
+"from azureml.core.model import InferenceConfig\n",
+"from azureml.core.webservice import AksWebservice\n",
+"from azureml.core.conda_dependencies import CondaDependencies\n",
+"from azureml.core.environment import Environment, DEFAULT_GPU_IMAGE\n",
+"\n",
+"env = Environment('deploytocloudenv')\n",
+"# Please see [Azure ML Containers repository](https://github.com/Azure/AzureML-Containers#featured-tags)\n",
+"# for open-sourced GPU base images.\n",
+"env.docker.base_image = DEFAULT_GPU_IMAGE\n",
+"env.python.conda_dependencies = CondaDependencies.create(conda_packages=['tensorflow-gpu==1.12.0','numpy'],\n",
+" pip_packages=['azureml-contrib-services', 'azureml-defaults'])\n",
+"\n",
+"inference_config = InferenceConfig(entry_script=\"score.py\", environment=env)\n",
+"aks_config = AksWebservice.deploy_configuration()\n",
+"\n",
+"# # Enable token auth and disable (key) auth on the webservice\n",
+"# aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True, auth_enabled=False)"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"%%time\n",
+"aks_service_name ='gpu-rn50'\n",
+"\n",
+"aks_service = Model.deploy(workspace=ws,\n",
+" name=aks_service_name,\n",
+" models=[model],\n",
+" inference_config=inference_config,\n",
+" deployment_config=aks_config,\n",
+" deployment_target=gpu_cluster)\n",
+"\n",
+"aks_service.wait_for_deployment(show_output = True)\n",
+"print(aks_service.state)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Test the web service\n",
+"We test the web service by passing the test image's content."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"%%time\n",
+"import requests\n",
+"\n",
+"# if (key) auth is enabled, fetch keys and include in the request\n",
+"key1, key2 = aks_service.get_keys()\n",
+"\n",
+"headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + key1}\n",
+"\n",
+"# # if token auth is enabled, fetch token and include in the request\n",
+"# access_token, fetch_after = aks_service.get_token()\n",
+"# headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + access_token}\n",
+"\n",
+"test_sample = open('snowleopardgaze.jpg', 'rb').read()\n",
+"resp = requests.post(aks_service.scoring_uri, test_sample, headers=headers)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Clean up\n",
+"Delete the service, image, model and compute target"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"%%time\n",
+"aks_service.delete()\n",
+"model.delete()\n",
+"gpu_cluster.delete()\n"
+]
+}
+],
+"metadata": {
+"authors": [
+{
+"name": "aashishb"
+}
+],
+"kernelspec": {
+"display_name": "Python 3.6",
+"language": "python",
+"name": "python36"
+},
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.6.6"
+}
+},
+"nbformat": 4,
+"nbformat_minor": 2
+}

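One small gap in the new notebook's test cell: the response returned by `requests.post` is never inspected. A hedged one-liner to look at the returned class scores, assuming the service returns the JSON text serialized by `score.py`:

```python
# Hedged sketch: inspect the scoring response produced by the test cell above.
print(resp.status_code)
print(resp.text)  # JSON list of class scores serialized by score.py
```
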
@@ -0,0 +1,5 @@
+name: production-deploy-to-aks-gpu
+dependencies:
+- pip:
+- azureml-sdk
+- tensorflow

Binary file not shown (new image added, 61 KiB).

@@ -76,7 +76,7 @@
 "from azureml.core.runconfig import RunConfiguration\n",
 "from azureml.core.conda_dependencies import CondaDependencies\n",
 "\n",
-"from azureml.train.automl.runtime import AutoMLStep\n",
+"from azureml.pipeline.steps import AutoMLStep\n",
 "\n",
 "# Check core SDK version number\n",
 "print(\"SDK version:\", azureml.core.VERSION)"

@@ -173,12 +173,7 @@
 "source": [
 "# create a new RunConfig object\n",
 "conda_run_config = RunConfiguration(framework=\"python\")\n",
-"\n",
-"conda_run_config.environment.docker.enabled = True\n",
-"conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
-"\n",
-"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], \n",
-" conda_packages=['numpy', 'py-xgboost<=0.80'])\n",
+"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'])\n",
 "conda_run_config.environment.python.conda_dependencies = cd\n",
 "\n",
 "print('run config is ready')"

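Read as plain Python rather than notebook JSON strings, the simplified run configuration above amounts to the following sketch; the explicit Docker base-image and extra conda packages are exactly what the commit removes.

```python
# Hedged sketch of the simplified run configuration after this change.
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies

conda_run_config = RunConfiguration(framework="python")
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'])
conda_run_config.environment.python.conda_dependencies = cd
print('run config is ready')
```
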
@@ -100,7 +100,7 @@
 "\n",
 "# Check core SDK version number\n",
 "\n",
-"print(\"This notebook was created using SDK version 1.1.1rc0, you are currently running version\", azureml.core.VERSION)"
+"print(\"This notebook was created using SDK version 1.1.2rc0, you are currently running version\", azureml.core.VERSION)"
 ]
 },
 {

@@ -145,9 +145,12 @@
 "import requests\n",
 "import os\n",
 "\n",
-"tf_code = requests.get(\"https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py\")\n",
+"tf_code = requests.get(\"https://raw.githubusercontent.com/tensorflow/tensorflow/r2.1/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py\")\n",
+"input_code = requests.get(\"https://raw.githubusercontent.com/tensorflow/tensorflow/r2.1/tensorflow/examples/tutorials/mnist/input_data.py\")\n",
 "with open(os.path.join(exp_dir, \"mnist_with_summaries.py\"), \"w\") as file:\n",
-" file.write(tf_code.text)"
+" file.write(tf_code.text.replace(\"from tensorflow.examples.tutorials.mnist import input_data\", \"import input_data\"))\n",
+"with open(os.path.join(exp_dir, \"input_data.py\"), \"w\") as file:\n",
+" file.write(input_code.text)"
 ]
 },
 {

@@ -186,7 +189,7 @@
 "from azureml.core import Experiment\n",
 "from azureml.core.script_run_config import ScriptRunConfig\n",
 "\n",
-"logs_dir = os.path.join(os.curdir, \"logs\")\n",
+"logs_dir = os.path.join(os.curdir, os.path.join(\"logs\", \"tb-logs\"))\n",
 "data_dir = os.path.abspath(os.path.join(os.curdir, \"mnist_data\"))\n",
 "\n",
 "if not path.exists(data_dir):\n",

@@ -334,7 +337,8 @@
 "tf_estimator = TensorFlow(source_directory=exp_dir,\n",
 " compute_target=attached_dsvm_compute,\n",
 " entry_script='mnist_with_summaries.py',\n",
-" script_params=script_params)\n",
+" script_params=script_params,\n",
+" framework_version=\"2.0\")\n",
 "\n",
 "run = exp.submit(tf_estimator)\n",
 "\n",

@@ -396,17 +400,16 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from azureml.core.compute import ComputeTarget, AmlCompute\n",
-"\n",
+"from azureml.core.compute import AmlCompute\n",
 "# choose a name for your cluster\n",
-"cluster_name = \"cpucluster\"\n",
+"cluster_name = \"cpu-cluster\"\n",
 "\n",
 "cts = ws.compute_targets\n",
 "found = False\n",
 "if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':\n",
 " found = True\n",
 " print('Found existing compute target.')\n",
 " compute_target = cts[cluster_name]\n",
 "if not found:\n",
 " print('Creating a new compute target...')\n",
 " compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', \n",

@@ -444,7 +447,8 @@
 "tf_estimator = TensorFlow(source_directory=exp_dir,\n",
 " compute_target=compute_target,\n",
 " entry_script='mnist_with_summaries.py',\n",
-" script_params=script_params)\n",
+" script_params=script_params,\n",
+" framework_version=\"2.0\")\n",
 "\n",
 "run = exp.submit(tf_estimator)\n",
 "\n",

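Both estimator hunks pin the framework version explicitly. Assembled as ordinary Python, the call looks roughly like the sketch below; the `TensorFlow` estimator import is not shown in this diff and is assumed to come from `azureml.train.dnn`, with `exp_dir`, `compute_target`, `script_params`, and `exp` defined earlier in the notebook.

```python
# Hedged sketch of the estimator construction after the change.
from azureml.train.dnn import TensorFlow

tf_estimator = TensorFlow(source_directory=exp_dir,
                          compute_target=compute_target,
                          entry_script='mnist_with_summaries.py',
                          script_params=script_params,
                          framework_version="2.0")
run = exp.submit(tf_estimator)
```
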
@@ -539,6 +543,24 @@
 "name": "roastala"
 }
 ],
+"category": "training",
+"compute": [
+"Local",
+"DSVM",
+"AML Compute"
+],
+"datasets": [
+"None"
+],
+"deployment": [
+"None"
+],
+"exclude_from_index": false,
+"framework": [
+"TensorFlow"
+],
+"friendly_name": "Tensorboard integration with run history",
+"index_order": 3,
 "kernelspec": {
 "display_name": "Python 3.6",
 "language": "python",

@@ -556,28 +578,10 @@
 "pygments_lexer": "ipython3",
 "version": "3.6.6"
 },
-"friendly_name": "Tensorboard integration with run history",
-"exclude_from_index": false,
-"index_order": 3,
-"category": "training",
-"task": "Run a TensorFlow job and view its Tensorboard output live",
-"datasets": [
-"None"
-],
-"compute": [
-"Local",
-"DSVM",
-"AML Compute"
-],
-"deployment": [
-"None"
-],
-"framework": [
-"TensorFlow"
-],
 "tags": [
 "None"
-]
+],
+"task": "Run a TensorFlow job and view its Tensorboard output live"
 },
 "nbformat": 4,
 "nbformat_minor": 2

@@ -3,4 +3,5 @@ dependencies:
 - pip:
 - azureml-sdk
 - azureml-tensorboard
-- tensorflow<1.15
+- tensorflow
+- setuptools>=41.0.0

@@ -3,7 +3,8 @@ dependencies:
 - pip:
 - azureml-sdk
 - azureml-tensorboard
-- tensorflow<1.15.0
+- tensorflow
 - tqdm
 - scipy
 - sklearn
+- setuptools>=41.0.0

@@ -1,10 +0,0 @@
-name: labeled-datasets
-dependencies:
-- pip:
-- azureml-sdk
-- azureml-dataprep
-- pandas
-- fuse
-- azureml.contrib.dataset
-- matplotlib
-- torchvision

@@ -141,7 +141,7 @@
 "from azureml.core.compute_target import ComputeTargetException\n",
 "\n",
 "# choose a name for your cluster\n",
-"cluster_name = \"your-cluster-name\"\n",
+"cluster_name = \"gpu-cluster\"\n",
 "\n",
 "try:\n",
 " compute_target = ComputeTarget(workspace=workspace, name=cluster_name)\n",

@@ -3,5 +3,5 @@ dependencies:
 - pip:
 - azureml-sdk
 - azureml-dataprep
-- pandas
+- pandas<=0.23.4
 - fuse

@@ -543,7 +543,7 @@
 "metadata": {
 "authors": [
 {
-"name": "ylxiong"
+"name": "jamgan"
 }
 ],
 "category": "tutorial",

@@ -3,4 +3,4 @@ dependencies:
 - pip:
 - azureml-sdk
 - azureml-dataprep
-- pandas
+- pandas<=0.23.4

@@ -529,8 +529,9 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"print(run.get_metrics())\n",
-"metrics = run.get_metrics()"
+"run.wait_for_completion()\n",
+"metrics = run.get_metrics()\n",
+"print(metrics)"
 ]
 },
 {

@@ -4,6 +4,6 @@ dependencies:
 - azureml-sdk
 - azureml-widgets
 - azureml-dataprep
-- pandas
+- pandas<=0.23.4
 - fuse
 - scikit-learn

index.md: 1 addition

@@ -117,6 +117,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
 | [enable-app-insights-in-production-service](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) | | | | | | |
 | [onnx-model-register-and-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-model-register-and-deploy.ipynb) | | | | | | |
 | [production-deploy-to-aks](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb) | | | | | | |
+| [production-deploy-to-aks-gpu](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.ipynb) | | | | | | |
 | [tensorflow-model-register-and-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/tensorflow/tensorflow-model-register-and-deploy.ipynb) | | | | | | |
 | [explain-model-on-amlcompute](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/explain-model/azure-integration/remote-explanation/explain-model-on-amlcompute.ipynb) | | | | | | |
 | [save-retrieve-explanations-run-history](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/explain-model/azure-integration/run-history/save-retrieve-explanations-run-history.ipynb) | | | | | | |

@@ -102,7 +102,7 @@
 "source": [
 "import azureml.core\n",
 "\n",
-"print(\"This notebook was created using version 1.1.1rc0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.1.2rc0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },

@@ -28,7 +28,6 @@ The following tutorials are intended to provide examples of more advanced featur
 | Tutorial | Description | Notebook | Task | Framework |
 | --- | --- | --- | --- | --- |
 | [Build an Azure Machine Learning pipeline for batch scoring](https://docs.microsoft.com/azure/machine-learning/tutorial-pipeline-batch-scoring-classification) | Create an Azure Machine Learning pipeline to run batch scoring image classification jobs | [tutorial-pipeline-batch-scoring-classification.ipynb](machine-learning-pipelines-advanced/tutorial-pipeline-batch-scoring-classification.ipynb) | Image Classification | TensorFlow
-Complete these tutorials to learn how to train and deploy models using Azure Machine Learning services and Python SDK. These Notebooks accompany the tutorial articles for:

 For additional documentation and resources, see the [official documentation site for Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/).

@@ -126,7 +126,8 @@
 "metadata": {},
 "source": [
 "### Create or Attach existing compute resource\n",
-"By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.\n",
+"By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. You will submit Python code to run on this VM later in the tutorial. \n",
+"The code below creates the compute clusters for you if they don't already exist in your workspace.\n",
 "\n",
 "**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process."
 ]

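The cell that "the code below" refers to is not part of this diff; a hedged sketch of the usual create-or-attach pattern it describes follows, with the cluster name and VM size purely illustrative.

```python
# Hedged sketch of the create-or-attach pattern described above; names are illustrative.
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

cluster_name = "cpu-cluster"
try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing compute target.')
except ComputeTargetException:
    print('Creating a new compute target...')
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                           max_nodes=4)
    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
    compute_target.wait_for_completion(show_output=True)
```
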
@@ -263,7 +264,7 @@
 "source": [
 "## Train on a remote cluster\n",
 "\n",
-"For this task, submit the job to the remote training cluster you set up earlier. To submit a job you:\n",
+"For this task, you submit the job to run on the remote training cluster you set up earlier. To submit a job you:\n",
 "* Create a directory\n",
 "* Create a training script\n",
 "* Create an estimator object\n",

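The steps listed in the reworded cell above (create a directory, a training script, an estimator object, then submit) are not shown in this diff; a hedged sketch of that flow follows, with the script name, directory, and estimator arguments purely illustrative.

```python
# Hedged sketch of the submit-to-remote-cluster steps listed above.
import os
from azureml.core import Experiment
from azureml.train.estimator import Estimator

script_folder = os.path.join(os.getcwd(), 'train-on-remote')
os.makedirs(script_folder, exist_ok=True)          # create a directory
# (a training script, e.g. train.py, is placed in script_folder)

est = Estimator(source_directory=script_folder,    # create an estimator object
                entry_script='train.py',
                compute_target=compute_target,
                pip_packages=['scikit-learn'])

exp = Experiment(workspace=ws, name='remote-train')
run = exp.submit(config=est)                       # submit the job
run.wait_for_completion(show_output=True)
```
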