Compare commits

...

12 Commits

Author SHA1 Message Date
vizhur
d3f1212440 update samples from Release-43 as a part of SDK release 2020-03-23 23:39:45 +00:00
Harneet Virk
b95a65eef4 Merge pull request #883 from Azure/release_update_stablev2/Release-3
update samples from Release-3 as a part of 1.2.0 SDK stable release
2020-03-23 16:21:53 -07:00
vizhur
2218af619f update samples from Release-3 as a part of 1.2.0 SDK stable release 2020-03-23 23:11:53 +00:00
Harneet Virk
0401128638 Merge pull request #878 from Azure/release_update/Release-42
update samples from Release-42 as a part of  SDK release
2020-03-20 11:14:02 -07:00
vizhur
59fcb54998 update samples from Release-42 as a part of SDK release 2020-03-20 18:10:08 +00:00
Harneet Virk
e0ea99a6bb Merge pull request #862 from Azure/release_update/Release-41
update samples from Release-41 as a part of  SDK release
2020-03-13 14:57:58 -07:00
vizhur
b06f5ce269 update samples from Release-41 as a part of SDK release 2020-03-13 21:57:04 +00:00
Harneet Virk
ed0ce9e895 Merge pull request #856 from Azure/release_update/Release-40
update samples from Release-40 as a part of  SDK release
2020-03-12 12:28:18 -07:00
vizhur
71053d705b update samples from Release-40 as a part of SDK release 2020-03-12 19:25:26 +00:00
Harneet Virk
77f98bf75f Merge pull request #852 from Azure/release_update_stable/Release-6
update samples from Release-6 as a part of 1.1.5 SDK stable release
2020-03-11 15:37:59 -06:00
vizhur
e443fd1342 update samples from Release-6 as a part of 1.1.5rc0 SDK stable release 2020-03-11 19:51:02 +00:00
Harneet Virk
2165cf308e update samples from Release-25 as a part of 1.1.2rc0 SDK experimental release (#829)
Co-authored-by: vizhur <vizhur@live.com>
2020-03-02 15:42:04 -05:00
58 changed files with 1023 additions and 280 deletions

View File

@@ -13,7 +13,7 @@ Read more detailed instructions on [how to set up your environment](./NBSETUP.md
## How to navigate and use the example notebooks?
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, you should always run the [Configuration](./configuration.ipynb) notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.
This [index](.index.md) should assist in navigating the Azure Machine Learning notebook samples and encourage efficient retrieval of topics and content.
This [index](./index.md) should assist in navigating the Azure Machine Learning notebook samples and encourage efficient retrieval of topics and content.
If you want to...
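After running the Configuration notebook once, the other notebooks reconnect to that workspace from the saved config file. A minimal sketch, assuming configuration.ipynb has already written config.json for your subscription:

```python
from azureml.core import Workspace

# Reconnect to the workspace described by config.json (written by configuration.ipynb)
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, sep='\n')
```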

View File

@@ -103,7 +103,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.1.1rc0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.2.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -144,7 +144,7 @@ jupyter notebook
- Dataset: forecasting for a bike-sharing
- Example of training an automated ML forecasting model on multiple time-series
- [automl-forecasting-function.ipynb](forecasting-high-frequency/automl-forecasting-function.ipynb)
- [auto-ml-forecasting-function.ipynb](forecasting-high-frequency/auto-ml-forecasting-function.ipynb)
- Example of training an automated ML forecasting model on multiple time-series
- [auto-ml-forecasting-beer-remote.ipynb](forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb)

View File

@@ -13,7 +13,7 @@ dependencies:
- scipy>=1.0.0,<=1.1.0
- scikit-learn>=0.19.0,<=0.20.3
- pandas>=0.22.0,<=0.23.4
- py-xgboost<=0.80
- py-xgboost<=0.90
- fbprophet==0.5
- pytorch=1.1.0
- cudatoolkit=9.0
@@ -21,6 +21,7 @@ dependencies:
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- azureml-dataprep[pandas]
- azureml-train-automl
- azureml-train
- azureml-widgets
@@ -28,10 +29,10 @@ dependencies:
- azureml-contrib-interpret
- pytorch-transformers==1.0.0
- spacy==2.1.8
- joblib
- onnxruntime==1.0.0
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
channels:
- anaconda
- conda-forge
- pytorch

View File

@@ -22,6 +22,7 @@ dependencies:
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- azureml-dataprep[pandas]
- azureml-train-automl
- azureml-train
- azureml-widgets
@@ -29,10 +30,10 @@ dependencies:
- azureml-contrib-interpret
- pytorch-transformers==1.0.0
- spacy==2.1.8
- joblib
- onnxruntime==1.0.0
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
channels:
- anaconda
- conda-forge
- pytorch

View File

@@ -320,7 +320,6 @@
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**training_data**|Input dataset, containing both features and label column.|\n",
"|**label_column_name**|The name of the label column.|\n",
"|**model_explainability**|Indicate to explain each trained pipeline or not.|\n",
"\n",
"**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
]
@@ -352,7 +351,6 @@
" training_data = train_data,\n",
" label_column_name = label,\n",
" validation_data = validation_dataset,\n",
" model_explainability=True,\n",
" **automl_settings\n",
" )"
]
@@ -500,11 +498,11 @@
"outputs": [],
"source": [
"# Wait for the best model explanation run to complete\n",
"from azureml.train.automl.run import AutoMLRun\n",
"from azureml.core.run import Run\n",
"model_explainability_run_id = remote_run.get_properties().get('ModelExplainRunId')\n",
"print(model_explainability_run_id)\n",
"if model_explainability_run_id is not None:\n",
" model_explainability_run = AutoMLRun(experiment=experiment, run_id=model_explainability_run_id)\n",
" model_explainability_run = Run(experiment=experiment, run_id=model_explainability_run_id)\n",
" model_explainability_run.wait_for_completion()\n",
"\n",
"# Get the best run object\n",

View File

@@ -5,7 +5,6 @@ dependencies:
- azureml-train-automl
- azureml-widgets
- matplotlib
- interpret
- onnxruntime==1.0.0
- azureml-explain-model
- azureml-contrib-interpret

View File

@@ -122,35 +122,22 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your AmlCompute cluster.\n",
"amlcompute_cluster_name = \"cpu-cluster-1\"\n",
"# Choose a name for your CPU cluster\n",
"cpu_cluster_name = \"cpu-cluster-1\"\n",
"\n",
"found = False\n",
"# Check if this compute target already exists in the workspace.\n",
"cts = ws.compute_targets\n",
"if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'cpu-cluster-1':\n",
" found = True\n",
" print('Found existing compute target.')\n",
" compute_target = cts[amlcompute_cluster_name]\n",
" \n",
"if not found:\n",
" print('Creating a new compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_DS12_V2\", # for GPU, use \"STANDARD_NC6\"\n",
" #vm_priority = 'lowpriority', # optional\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print('Found existing cluster, use it.')\n",
"except ComputeTargetException:\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n",
" max_nodes=6)\n",
" compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
" # Create the cluster.\n",
" compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n",
" \n",
"print('Checking cluster status...')\n",
"# Can poll for a minimum number of nodes and for a specific timeout.\n",
"# If no min_node_count is provided, it will use the scale settings for the cluster.\n",
"compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n",
"\n",
"# For a more detailed view of current AmlCompute status, use get_status()."
"compute_target.wait_for_completion(show_output=True)"
]
},
{

View File

@@ -5,5 +5,4 @@ dependencies:
- azureml-train-automl
- azureml-widgets
- matplotlib
- interpret
- azureml-explain-model

View File

@@ -343,7 +343,7 @@
"outputs": [],
"source": [
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.train.automl import AutoMLStep\n",
"from azureml.pipeline.steps import AutoMLStep\n",
"\n",
"automl_settings = {\n",
" \"iteration_timeout_minutes\": 10,\n",

View File

@@ -1,9 +1,9 @@
name: auto-ml-forecasting-beer-remote
dependencies:
- fbprophet==0.5
- py-xgboost<=0.80
- py-xgboost<=0.90
- pip:
- azureml-sdk
- numpy==1.16.2
- azureml-train-automl
- azureml-widgets
- matplotlib

View File

@@ -1,9 +1,9 @@
name: auto-ml-forecasting-bike-share
dependencies:
- fbprophet==0.5
- py-xgboost<=0.80
- py-xgboost<=0.90
- pip:
- azureml-sdk
- numpy==1.16.2
- azureml-train-automl
- azureml-widgets
- matplotlib

View File

@@ -2,9 +2,9 @@ name: auto-ml-forecasting-energy-demand
dependencies:
- pip:
- azureml-sdk
- numpy==1.16.2
- azureml-train-automl
- azureml-widgets
- matplotlib
- interpret
- azureml-explain-model
- azureml-contrib-interpret

View File

@@ -459,8 +459,8 @@
"# use forecast_quantiles function, not the forecast() one\n",
"y_pred_quantiles = fitted_model.forecast_quantiles(X_test)\n",
"\n",
"# it all nicely aligns column-wise\n",
"pd.concat([X_test.reset_index(), y_pred_quantiles], axis=1)"
"# quantile forecasts returned in a Dataframe along with the time and grain columns \n",
"y_pred_quantiles"
]
},
{
@@ -701,7 +701,7 @@
"metadata": {
"authors": [
{
"name": "erwright, nirovins"
"name": "erwright"
}
],
"category": "tutorial",

View File

@@ -1,9 +1,9 @@
name: automl-forecasting-function
name: auto-ml-forecasting-function
dependencies:
- fbprophet==0.5
- py-xgboost<=0.80
- py-xgboost<=0.90
- pip:
- azureml-sdk
- numpy==1.16.2
- azureml-train-automl
- azureml-widgets
- matplotlib

View File

@@ -1,9 +1,10 @@
name: auto-ml-forecasting-orange-juice-sales
dependencies:
- fbprophet==0.5
- py-xgboost<=0.80
- py-xgboost<=0.90
- pip:
- azureml-sdk
- numpy==1.16.2
- pandas==0.23.4
- azureml-train-automl
- azureml-widgets
- matplotlib

View File

@@ -49,7 +49,9 @@
"2. Configure AutoML using `AutoMLConfig`.\n",
"3. Train the model.\n",
"4. Explore the results.\n",
"5. Test the fitted model."
"5. Visualization model's feature importance in azure portal\n",
"6. Explore any model's explanation and explore feature importance in azure portal\n",
"7. Test the fitted model."
]
},
{
@@ -71,13 +73,13 @@
"\n",
"from matplotlib import pyplot as plt\n",
"import pandas as pd\n",
"import os\n",
"\n",
"import azureml.core\n",
"from azureml.core.experiment import Experiment\n",
"from azureml.core.workspace import Workspace\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.train.automl import AutoMLConfig"
"from azureml.train.automl import AutoMLConfig\n",
"from azureml.explain.model._internal.explanation_client import ExplanationClient"
]
},
{
@@ -262,6 +264,133 @@
"The fitted_model is a python object and you can read the different properties of the object.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Best Model 's explanation\n",
"Retrieve the explanation from the best_run which includes explanations for engineered features and raw features.\n",
"\n",
"#### Download engineered feature importance from artifact store\n",
"You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"client = ExplanationClient.from_run(best_run)\n",
"engineered_explanations = client.download_model_explanation(raw=False)\n",
"print(engineered_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + best_run.get_portal_url())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explanations\n",
"In this section, we will show how to compute model explanations and visualize the explanations using azureml-explain-model package. Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Retrieve any other AutoML model from training"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_run, fitted_model = local_run.get_output(metric='accuracy')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Setup the model explanations for AutoML models\n",
"The fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-\n",
"\n",
"1. Featurized data from train samples/test samples\n",
"2. Gather engineered name lists\n",
"3. Find the classes in your labeled column in classification scenarios\n",
"\n",
"The automl_explainer_setup_obj contains all the structures from above list."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train = training_data.drop_columns(columns=[label_column_name])\n",
"y_train = training_data.keep_columns(columns=[label_column_name], validate=True)\n",
"X_test = validation_data.drop_columns(columns=[label_column_name])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations\n",
"\n",
"automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, \n",
" X_test=X_test, y=y_train, \n",
" task='classification')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Initialize the Mimic Explainer for feature importance\n",
"For explaining the AutoML models, use the MimicWrapper from azureml.explain.model package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a LightGBM model which acts as a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel\n",
"from azureml.explain.model.mimic_wrapper import MimicWrapper\n",
"explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, \n",
" init_dataset=automl_explainer_setup_obj.X_transform, run=automl_run,\n",
" features=automl_explainer_setup_obj.engineered_feature_names, \n",
" feature_maps=[automl_explainer_setup_obj.feature_map],\n",
" classes=automl_explainer_setup_obj.classes)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Use Mimic Explainer for computing and visualizing engineered feature importance\n",
"The explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)\n",
"print(engineered_explanations.get_feature_importance_dict())\n",
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -358,7 +487,7 @@
"metadata": {
"authors": [
{
"name": "tzvikei"
"name": "anumamah"
}
],
"category": "tutorial",

View File

@@ -5,5 +5,4 @@ dependencies:
- azureml-train-automl
- azureml-widgets
- matplotlib
- interpret
- azureml-explain-model

View File

@@ -51,8 +51,8 @@
"4. Explore the results and featurization transparency options\n",
"5. Setup remote compute for computing the model explanations for a given AutoML model.\n",
"6. Start an AzureML experiment on your remote compute to compute explanations for an AutoML model.\n",
"7. Download the feature importance for engineered features and visualize the explanations for engineered features. \n",
"8. Download the feature importance for raw features and visualize the explanations for raw features. \n"
"7. Download the feature importance for engineered features and visualize the explanations for engineered features on azure portal. \n",
"8. Download the feature importance for raw features and visualize the explanations for raw features on azure portal. \n"
]
},
{
@@ -514,7 +514,7 @@
" content = cefr.read()\n",
"\n",
"# Replace the values in train_explainer.py file with the appropriate values\n",
"content = content.replace('<<experimnet_name>>', automl_run.experiment.name) # your experiment name.\n",
"content = content.replace('<<experiment_name>>', automl_run.experiment.name) # your experiment name.\n",
"content = content.replace('<<run_id>>', automl_run.id) # Run-id of the AutoML run for which you want to explain the model.\n",
"content = content.replace('<<target_column_name>>', 'ERP') # Your target column name\n",
"content = content.replace('<<task>>', 'regression') # Training task type\n",
@@ -532,8 +532,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Create conda configuration for model explanations experiment\n",
"We need `azureml-explain-model`, `azureml-train-automl` and `azureml-core` packages for computing model explanations for your AutoML model on remote compute."
"#### Create conda configuration for model explanations experiment from automl_run object"
]
},
{
@@ -552,13 +551,9 @@
"# Set compute target to AmlCompute\n",
"conda_run_config.target = compute_target\n",
"conda_run_config.environment.docker.enabled = True\n",
"azureml_pip_packages = [\n",
" 'azureml-train-automl', 'azureml-core', 'azureml-explain-model'\n",
"]\n",
"\n",
"# specify CondaDependencies obj\n",
"conda_run_config.environment.python.conda_dependencies = CondaDependencies.create(\n",
" pip_packages=azureml_pip_packages)"
"conda_run_config.environment.python.conda_dependencies = automl_run.get_environment().python.conda_dependencies"
]
},
{
@@ -603,38 +598,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Feature importance and explanation dashboard\n",
"In this section we describe how you can download the explanation results from the explanations experiment and visualize the feature importance for your AutoML model. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Setup for visualizing the model explanation results\n",
"For visualizing the explanation results for the *fitted_model* we need to perform the following steps:-\n",
"1. Featurize test data samples.\n",
"\n",
"The *automl_explainer_setup_obj* contains all the structures from above list. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_test = test_data.drop_columns([label]).to_pandas_dataframe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations\n",
"explainer_setup_class = automl_setup_model_explanations(fitted_model, 'regression', X_test=X_test)"
"### Feature importance and visualizing explanation dashboard\n",
"In this section we describe how you can download the explanation results from the explanations experiment and visualize the feature importance for your AutoML model on the azure portal."
]
},
{
@@ -642,7 +607,7 @@
"metadata": {},
"source": [
"#### Download engineered feature importance from artifact store\n",
"You can use *ExplanationClient* to download the engineered feature explanations from the artifact store of the *automl_run*. You can also use ExplanationDashboard to view the dash board visualization of the feature importance values of the engineered features."
"You can use *ExplanationClient* to download the engineered feature explanations from the artifact store of the *automl_run*. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features."
]
},
{
@@ -652,11 +617,10 @@
"outputs": [],
"source": [
"from azureml.explain.model._internal.explanation_client import ExplanationClient\n",
"from interpret_community.widget import ExplanationDashboard\n",
"client = ExplanationClient.from_run(automl_run)\n",
"engineered_explanations = client.download_model_explanation(raw=False)\n",
"print(engineered_explanations.get_feature_importance_dict())\n",
"ExplanationDashboard(engineered_explanations, explainer_setup_class.automl_estimator, datasetX=explainer_setup_class.X_test_transform)"
"print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
]
},
{
@@ -664,7 +628,7 @@
"metadata": {},
"source": [
"#### Download raw feature importance from artifact store\n",
"You can use *ExplanationClient* to download the raw feature explanations from the artifact store of the *automl_run*. You can also use ExplanationDashboard to view the dash board visualization of the feature importance values of the raw features."
"You can use *ExplanationClient* to download the raw feature explanations from the artifact store of the *automl_run*. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features."
]
},
{
@@ -675,7 +639,7 @@
"source": [
"raw_explanations = client.download_model_explanation(raw=True)\n",
"print(raw_explanations.get_feature_importance_dict())\n",
"ExplanationDashboard(raw_explanations, explainer_setup_class.automl_pipeline, datasetX=explainer_setup_class.X_test_raw)"
"print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())"
]
},
{
@@ -808,6 +772,7 @@
"outputs": [],
"source": [
"if service.state == 'Healthy':\n",
" X_test = test_data.drop_columns([label]).to_pandas_dataframe()\n",
" # Serialize the first row of the test data into json\n",
" X_test_json = X_test[:1].to_json(orient='records')\n",
" print(X_test_json)\n",

View File

@@ -5,7 +5,6 @@ dependencies:
- azureml-train-automl
- azureml-widgets
- matplotlib
- interpret
- azureml-explain-model
- azureml-explain-model
- azureml-contrib-interpret

View File

@@ -22,7 +22,7 @@ run = Run.get_context()
ws = run.experiment.workspace
# Get the AutoML run object from the experiment name and the workspace
experiment = Experiment(ws, '<<experimnet_name>>')
experiment = Experiment(ws, '<<experiment_name>>')
automl_run = Run(experiment=experiment, run_id='<<run_id>>')
# Check if this AutoML model is explainable

View File

@@ -2,6 +2,7 @@ name: auto-ml-regression
dependencies:
- pip:
- azureml-sdk
- pandas==0.23.4
- azureml-train-automl
- azureml-widgets
- matplotlib
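As an aside, a per-notebook .yml spec like the one above can also be turned into an AzureML Environment for remote runs; a minimal sketch, assuming the file is saved next to the notebook as auto-ml-regression.yml:

```python
from azureml.core import Environment

# Build an AzureML Environment from the conda spec file (file name is an assumption)
env = Environment.from_conda_specification(name='auto-ml-regression-env',
                                           file_path='auto-ml-regression.yml')
```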

View File

@@ -341,9 +341,6 @@
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"\n",
"input_payload = json.dumps({\n",
" 'data': [\n",
" [ 0.03807591, 0.05068012, 0.06169621, 0.02187235, -0.0442235,\n",
@@ -376,16 +373,101 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Model profiling\n",
"### Model Profiling\n",
"\n",
"You can also take advantage of the profiling feature to estimate CPU and memory requirements for models.\n",
"Profile your model to understand how much CPU and memory the service, created as a result of its deployment, will need. Profiling returns information such as CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage. You can profile your model (or more precisely the service built based on your model) on any CPU and/or memory combination where 0.1 <= CPU <= 3.5 and 0.1GB <= memory <= 15GB. If you do not provide a CPU and/or memory requirement, we will test it on the default configuration of 3.5 CPU and 15GB memory.\n",
"\n",
"```python\n",
"profile = Model.profile(ws, \"profilename\", [model], inference_config, test_sample)\n",
"profile.wait_for_profiling(True)\n",
"profiling_results = profile.get_results()\n",
"print(profiling_results)\n",
"```"
"In order to profile your model you will need:\n",
"- a registered model\n",
"- an entry script\n",
"- an inference configuration\n",
"- a single column tabular dataset, where each row contains a string representing sample request data sent to the service.\n",
"\n",
"At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring.\n",
"\n",
"Below is an example of how you can construct an input dataset to profile a service which expects its incoming requests to contain serialized json. In this case we created a dataset based one hundred instances of the same request data. In real world scenarios however, we suggest that you use larger datasets with various inputs, especially if your model resource usage/behavior is input dependent."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Datastore\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.data import dataset_type_definitions\n",
"\n",
"\n",
"# create a string that can be utf-8 encoded and\n",
"# put in the body of the request\n",
"serialized_input_json = json.dumps({\n",
" 'data': [\n",
" [ 0.03807591, 0.05068012, 0.06169621, 0.02187235, -0.0442235,\n",
" -0.03482076, -0.04340085, -0.00259226, 0.01990842, -0.01764613]\n",
" ]\n",
"})\n",
"dataset_content = []\n",
"for i in range(100):\n",
" dataset_content.append(serialized_input_json)\n",
"dataset_content = '\\n'.join(dataset_content)\n",
"file_name = 'sample_request_data.txt'\n",
"f = open(file_name, 'w')\n",
"f.write(dataset_content)\n",
"f.close()\n",
"\n",
"# upload the txt file created above to the Datastore and create a dataset from it\n",
"data_store = Datastore.get_default(ws)\n",
"data_store.upload_files(['./' + file_name], target_path='sample_request_data')\n",
"datastore_path = [(data_store, 'sample_request_data' +'/' + file_name)]\n",
"sample_request_data = Dataset.Tabular.from_delimited_files(\n",
" datastore_path,\n",
" separator='\\n',\n",
" infer_column_types=True,\n",
" header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)\n",
"sample_request_data = sample_request_data.register(workspace=ws,\n",
" name='diabetes_sample_request_data',\n",
" create_new_version=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have an input dataset we are ready to go ahead with profiling. In this case we are testing the previously introduced sklearn regression model on 1 CPU and 0.5 GB memory. The memory usage and recommendation presented in the result is measured in Gigabytes. The CPU usage and recommendation is measured in CPU cores."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"\n",
"\n",
"environment = Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'joblib',\n",
" 'numpy',\n",
" 'scikit-learn'\n",
"])\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"# if cpu and memory_in_gb parameters are not provided\n",
"# the model will be profiled on default configuration of\n",
"# 3.5CPU and 15GB memory\n",
"profile = Model.profile(ws,\n",
" 'rgrsn-%s' % datetime.now().strftime('%m%d%Y-%H%M%S'),\n",
" [model],\n",
" inference_config,\n",
" input_dataset=sample_request_data,\n",
" cpu=1.0,\n",
" memory_in_gb=0.5)\n",
"\n",
"profile.wait_for_completion(True)\n",
"details = profile.get_details()"
]
},
{

View File

@@ -145,6 +145,110 @@
" environment=environment)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model Profiling\n",
"\n",
"Profile your model to understand how much CPU and memory the service, created as a result of its deployment, will need. Profiling returns information such as CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage. You can profile your model (or more precisely the service built based on your model) on any CPU and/or memory combination where 0.1 <= CPU <= 3.5 and 0.1GB <= memory <= 15GB. If you do not provide a CPU and/or memory requirement, we will test it on the default configuration of 3.5 CPU and 15GB memory.\n",
"\n",
"In order to profile your model you will need:\n",
"- a registered model\n",
"- an entry script\n",
"- an inference configuration\n",
"- a single column tabular dataset, where each row contains a string representing sample request data sent to the service.\n",
"\n",
"At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring.\n",
"\n",
"Below is an example of how you can construct an input dataset to profile a service which expects its incoming requests to contain serialized json. In this case we created a dataset based one hundred instances of the same request data. In real world scenarios however, we suggest that you use larger datasets with various inputs, especially if your model resource usage/behavior is input dependent."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"from azureml.core import Datastore\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.data import dataset_type_definitions\n",
"\n",
"\n",
"# create a string that can be put in the body of the request\n",
"serialized_input_json = json.dumps({\n",
" 'data': [\n",
" [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n",
" [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]\n",
" ]\n",
"})\n",
"dataset_content = []\n",
"for i in range(100):\n",
" dataset_content.append(serialized_input_json)\n",
"dataset_content = '\\n'.join(dataset_content)\n",
"file_name = 'sample_request_data_diabetes.txt'\n",
"f = open(file_name, 'w')\n",
"f.write(dataset_content)\n",
"f.close()\n",
"\n",
"# upload the txt file created above to the Datastore and create a dataset from it\n",
"data_store = Datastore.get_default(ws)\n",
"data_store.upload_files(['./' + file_name], target_path='sample_request_data_diabetes')\n",
"datastore_path = [(data_store, 'sample_request_data_diabetes' +'/' + file_name)]\n",
"sample_request_data_diabetes = Dataset.Tabular.from_delimited_files(\n",
" datastore_path,\n",
" separator='\\n',\n",
" infer_column_types=True,\n",
" header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)\n",
"sample_request_data_diabetes = sample_request_data_diabetes.register(workspace=ws,\n",
" name='sample_request_data_diabetes',\n",
" create_new_version=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have an input dataset we are ready to go ahead with profiling. In this case we are testing the previously introduced sklearn regression model on 1 CPU and 0.5 GB memory. The memory usage and recommendation presented in the result is measured in Gigabytes. The CPU usage and recommendation is measured in CPU cores."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"from azureml.core import Environment\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.model import Model, InferenceConfig\n",
"\n",
"\n",
"environment = Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'joblib',\n",
" 'numpy',\n",
" 'scikit-learn'\n",
"])\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"# if cpu and memory_in_gb parameters are not provided\n",
"# the model will be profiled on default configuration of\n",
"# 3.5CPU and 15GB memory\n",
"profile = Model.profile(ws,\n",
" 'profile-%s' % datetime.now().strftime('%m%d%Y-%H%M%S'),\n",
" [model],\n",
" inference_config,\n",
" input_dataset=sample_request_data_diabetes,\n",
" cpu=1.0,\n",
" memory_in_gb=0.5)\n",
"\n",
"profile.wait_for_completion(True)\n",
"details = profile.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -0,0 +1,314 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploying a web service to Azure Kubernetes Service (AKS)\n",
"This notebook shows the steps for deploying a service: registering a model, creating an image, provisioning a cluster (one time action), and deploying a service to it. \n",
"We then test and delete the service, image and model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import azureml.core\n",
"print(azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Get workspace\n",
"Load existing workspace from the config file info."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.workspace import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Register the model\n",
"Register an existing trained model, add descirption and tags. Prior to registering the model, you should have a TensorFlow [Saved Model](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md) in the `resnet50` directory. You can download a [pretrained resnet50](http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz) and unpack it to that directory."
]
},
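A minimal sketch of fetching and unpacking the pretrained ResNet50 SavedModel linked above into a local resnet50 folder (the directory layout inside the archive is an assumption; adjust the extraction path so the SavedModel files land under resnet50):

```python
import os
import tarfile
import urllib.request

# Pretrained ResNet50 SavedModel referenced in the cell above
url = ('http://download.tensorflow.org/models/official/20181001_resnet/'
       'savedmodels/resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz')
archive = 'resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz'

urllib.request.urlretrieve(url, archive)   # download the archive
os.makedirs('resnet50', exist_ok=True)
with tarfile.open(archive) as tar:         # unpack into ./resnet50
    tar.extractall('resnet50')
```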
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Register the model\n",
"from azureml.core.model import Model\n",
"model = Model.register(model_path = \"resnet50\", # this points to a local file\n",
" model_name = \"resnet50\", # this is the name the model is registered as\n",
" tags = {'area': \"Image classification\", 'type': \"classification\"},\n",
" description = \"Image classification trained on Imagenet Dataset\",\n",
" workspace = ws)\n",
"\n",
"print(model.name, model.description, model.version)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Provision the AKS Cluster\n",
"This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AksCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# Choose a name for your GPU cluster\n",
"gpu_cluster_name = \"aks-gpu-cluster\"\n",
"\n",
"# Verify that cluster does not exist already\n",
"try:\n",
" gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)\n",
" print(\"Found existing gpu cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new gpu-cluster\")\n",
" \n",
" # Specify the configuration for the new cluster\n",
" compute_config = AksCompute.provisioning_configuration(cluster_purpose=AksCompute.ClusterPurpose.DEV_TEST,\n",
" agent_count=1,\n",
" vm_size=\"Standard_NV6\")\n",
" # Create the cluster with the specified name and configuration\n",
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)\n",
"\n",
" # Wait for the cluster to complete, show the output log\n",
" gpu_cluster.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploy the model as a web service to AKS\n",
"\n",
"First create a scoring script"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"import json\n",
"import os\n",
"from azureml.contrib.services.aml_request import AMLRequest, rawhttp\n",
"from azureml.contrib.services.aml_response import AMLResponse\n",
"\n",
"def init():\n",
" global session\n",
" global input_name\n",
" global output_name\n",
" \n",
" session = tf.Session()\n",
"\n",
" # AZUREML_MODEL_DIR is an environment variable created during deployment.\n",
" # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)\n",
" # For multiple models, it points to the folder containing all deployed models (./azureml-models)\n",
" model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'resnet50')\n",
" model = tf.saved_model.loader.load(session, ['serve'], model_path)\n",
" if len(model.signature_def['serving_default'].inputs) > 1:\n",
" raise ValueError(\"This score.py only supports one input\")\n",
" input_name = [tensor.name for tensor in model.signature_def['serving_default'].inputs.values()][0]\n",
" output_name = [tensor.name for tensor in model.signature_def['serving_default'].outputs.values()]\n",
" \n",
"\n",
"@rawhttp\n",
"def run(request):\n",
" if request.method == 'POST':\n",
" reqBody = request.get_data(False)\n",
" resp = score(reqBody)\n",
" return AMLResponse(resp, 200)\n",
" if request.method == 'GET':\n",
" respBody = str.encode(\"GET is not supported\")\n",
" return AMLResponse(respBody, 405)\n",
" return AMLResponse(\"bad request\", 500)\n",
"\n",
"def score(data):\n",
" result = session.run(output_name, {input_name: [data]})\n",
" return json.dumps(result[1].tolist())\n",
"\n",
"if __name__ == \"__main__\":\n",
" init()\n",
" with open(\"test_image.jpg\", 'rb') as f:\n",
" content = f.read()\n",
" print(score(content))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now create the deployment configuration objects and deploy the model as a webservice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the web service configuration (using default here)\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AksWebservice\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.environment import Environment, DEFAULT_GPU_IMAGE\n",
"\n",
"env = Environment('deploytocloudenv')\n",
"# Please see [Azure ML Containers repository](https://github.com/Azure/AzureML-Containers#featured-tags)\n",
"# for open-sourced GPU base images.\n",
"env.docker.base_image = DEFAULT_GPU_IMAGE\n",
"env.python.conda_dependencies = CondaDependencies.create(conda_packages=['tensorflow-gpu==1.12.0','numpy'],\n",
" pip_packages=['azureml-contrib-services', 'azureml-defaults'])\n",
"\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=env)\n",
"aks_config = AksWebservice.deploy_configuration()\n",
"\n",
"# # Enable token auth and disable (key) auth on the webservice\n",
"# aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True, auth_enabled=False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service_name ='gpu-rn50'\n",
"\n",
"aks_service = Model.deploy(workspace=ws,\n",
" name=aks_service_name,\n",
" models=[model],\n",
" inference_config=inference_config,\n",
" deployment_config=aks_config,\n",
" deployment_target=gpu_cluster)\n",
"\n",
"aks_service.wait_for_deployment(show_output = True)\n",
"print(aks_service.state)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Test the web service\n",
"We test the web sevice by passing the test images content."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"import requests\n",
"\n",
"# if (key) auth is enabled, fetch keys and include in the request\n",
"key1, key2 = aks_service.get_keys()\n",
"\n",
"headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + key1}\n",
"\n",
"# # if token auth is enabled, fetch token and include in the request\n",
"# access_token, fetch_after = aks_service.get_token()\n",
"# headers = {'Content-Type':'application/json', 'Authorization': 'Bearer ' + access_token}\n",
"\n",
"test_sample = open('snowleopardgaze.jpg', 'rb').read()\n",
"resp = requests.post(aks_service.scoring_uri, test_sample, headers=headers)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Clean up\n",
"Delete the service, image, model and compute target"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"aks_service.delete()\n",
"model.delete()\n",
"gpu_cluster.delete()\n"
]
}
],
"metadata": {
"authors": [
{
"name": "aashishb"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,5 @@
name: production-deploy-to-aks-gpu
dependencies:
- pip:
- azureml-sdk
- tensorflow

Binary file not shown (new image, 61 KiB)

View File

@@ -198,6 +198,106 @@
"inf_config = InferenceConfig(entry_script='score.py', environment=myenv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Model Profiling\n",
"\n",
"Profile your model to understand how much CPU and memory the service, created as a result of its deployment, will need. Profiling returns information such as CPU usage, memory usage, and response latency. It also provides a CPU and memory recommendation based on the resource usage. You can profile your model (or more precisely the service built based on your model) on any CPU and/or memory combination where 0.1 <= CPU <= 3.5 and 0.1GB <= memory <= 15GB. If you do not provide a CPU and/or memory requirement, we will test it on the default configuration of 3.5 CPU and 15GB memory.\n",
"\n",
"In order to profile your model you will need:\n",
"- a registered model\n",
"- an entry script\n",
"- an inference configuration\n",
"- a single column tabular dataset, where each row contains a string representing sample request data sent to the service.\n",
"\n",
"At this point we only support profiling of services that expect their request data to be a string, for example: string serialized json, text, string serialized image, etc. The content of each row of the dataset (string) will be put into the body of the HTTP request and sent to the service encapsulating the model for scoring.\n",
"\n",
"Below is an example of how you can construct an input dataset to profile a service which expects its incoming requests to contain serialized json. In this case we created a dataset based one hundred instances of the same request data. In real world scenarios however, we suggest that you use larger datasets with various inputs, especially if your model resource usage/behavior is input dependent."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"from azureml.core import Datastore\n",
"from azureml.core.dataset import Dataset\n",
"from azureml.data import dataset_type_definitions\n",
"\n",
"input_json = {'data': [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n",
" [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]]}\n",
"# create a string that can be put in the body of the request\n",
"serialized_input_json = json.dumps(input_json)\n",
"dataset_content = []\n",
"for i in range(100):\n",
" dataset_content.append(serialized_input_json)\n",
"sample_request_data = '\\n'.join(dataset_content)\n",
"file_name = 'sample_request_data.txt'\n",
"f = open(file_name, 'w')\n",
"f.write(sample_request_data)\n",
"f.close()\n",
"\n",
"# upload the txt file created above to the Datastore and create a dataset from it\n",
"data_store = Datastore.get_default(ws)\n",
"data_store.upload_files(['./' + file_name], target_path='sample_request_data')\n",
"datastore_path = [(data_store, 'sample_request_data' +'/' + file_name)]\n",
"sample_request_data = Dataset.Tabular.from_delimited_files(\n",
" datastore_path,\n",
" separator='\\n',\n",
" infer_column_types=True,\n",
" header=dataset_type_definitions.PromoteHeadersBehavior.NO_HEADERS)\n",
"sample_request_data = sample_request_data.register(workspace=ws,\n",
" name='sample_request_data',\n",
" create_new_version=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have an input dataset we are ready to go ahead with profiling. In this case we are testing the previously introduced sklearn regression model on 1 CPU and 0.5 GB memory. The memory usage and recommendation presented in the result is measured in Gigabytes. The CPU usage and recommendation is measured in CPU cores."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"from azureml.core import Environment\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.model import Model, InferenceConfig\n",
"\n",
"\n",
"environment = Environment('my-sklearn-environment')\n",
"environment.python.conda_dependencies = CondaDependencies.create(pip_packages=[\n",
" 'azureml-defaults',\n",
" 'inference-schema[numpy-support]',\n",
" 'joblib',\n",
" 'numpy',\n",
" 'scikit-learn'\n",
"])\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"# if cpu and memory_in_gb parameters are not provided\n",
"# the model will be profiled on default configuration of\n",
"# 3.5CPU and 15GB memory\n",
"profile = Model.profile(ws,\n",
" 'sklearn-%s' % datetime.now().strftime('%m%d%Y-%H%M%S'),\n",
" [model],\n",
" inference_config,\n",
" input_dataset=sample_request_data,\n",
" cpu=1.0,\n",
" memory_in_gb=0.5)\n",
"\n",
"profile.wait_for_completion(True)\n",
"details = profile.get_details()"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -2,7 +2,6 @@ name: explain-model-on-amlcompute
dependencies:
- pip:
- azureml-sdk
- interpret
- azureml-interpret
- azureml-contrib-interpret
- sklearn-pandas

View File

@@ -2,7 +2,6 @@ name: save-retrieve-explanations-run-history
dependencies:
- pip:
- azureml-sdk
- interpret
- azureml-interpret
- azureml-contrib-interpret
- ipywidgets

View File

@@ -2,7 +2,6 @@ name: train-explain-model-locally-and-deploy
dependencies:
- pip:
- azureml-sdk
- interpret
- azureml-interpret
- azureml-contrib-interpret
- sklearn-pandas

View File

@@ -2,7 +2,6 @@ name: train-explain-model-on-amlcompute-and-deploy
dependencies:
- pip:
- azureml-sdk
- interpret
- azureml-interpret
- azureml-contrib-interpret
- sklearn-pandas

View File

@@ -76,7 +76,7 @@
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"from azureml.train.automl.runtime import AutoMLStep\n",
"from azureml.pipeline.steps import AutoMLStep\n",
"\n",
"# Check core SDK version number\n",
"print(\"SDK version:\", azureml.core.VERSION)"
@@ -173,12 +173,7 @@
"source": [
"# create a new RunConfig object\n",
"conda_run_config = RunConfiguration(framework=\"python\")\n",
"\n",
"conda_run_config.environment.docker.enabled = True\n",
"conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE\n",
"\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], \n",
" conda_packages=['numpy', 'py-xgboost<=0.80'])\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'])\n",
"conda_run_config.environment.python.conda_dependencies = cd\n",
"\n",
"print('run config is ready')"

View File

@@ -507,7 +507,7 @@
"metadata": {},
"source": [
"### Create myenv.yml\n",
"We also need to create an environment file so that Azure Machine Learning can install the necessary packages in the Docker image which are required by your scoring script. In this case, we need to specify conda packages `numpy` and `chainer`. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
"We also need to create an environment file so that Azure Machine Learning can install the necessary packages in the Docker image which are required by your scoring script. In this case, we need to specify conda package `numpy` and pip install `chainer`. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
@@ -520,7 +520,7 @@
"\n",
"cd = CondaDependencies.create()\n",
"cd.add_conda_package('numpy')\n",
"cd.add_conda_package('chainer')\n",
"cd.add_pip_package('chainer==5.1.0')\n",
"cd.add_pip_package(\"azureml-defaults\")\n",
"cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')\n",
"\n",

View File

@@ -161,7 +161,7 @@
},
"source": [
"## Download MNIST dataset\n",
"In order to train on the MNIST dataset we will first need to download it from Yan LeCun's web site directly and save them in a `data` folder locally."
"In order to train on the MNIST dataset we will first need to download it from azuremlopendatasets blob directly and save them in a `data` folder locally. If you want you can directly download the same data from Yan LeCun's web site."
]
},
{
@@ -171,13 +171,17 @@
"outputs": [],
"source": [
"import urllib\n",
"data_folder = 'data'\n",
"os.makedirs(data_folder, exist_ok=True)\n",
"\n",
"os.makedirs('./data/mnist', exist_ok=True)\n",
"\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename = './data/mnist/train-images.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename = './data/mnist/train-labels.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename = './data/mnist/test-images.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename = './data/mnist/test-labels.gz')"
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/train-images-idx3-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'train-images.gz'))\n",
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/train-labels-idx1-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'train-labels.gz'))\n",
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/t10k-images-idx3-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'test-images.gz'))\n",
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/t10k-labels-idx1-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'test-labels.gz'))"
]
},
{
@@ -205,11 +209,11 @@
"from utils import load_data\n",
"\n",
"# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster.\n",
"X_train = load_data('./data/mnist/train-images.gz', False) / 255.0\n",
"y_train = load_data('./data/mnist/train-labels.gz', True).reshape(-1)\n",
"X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0\n",
"y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)\n",
"\n",
"X_test = load_data('./data/mnist/test-images.gz', False) / 255.0\n",
"y_test = load_data('./data/mnist/test-labels.gz', True).reshape(-1)\n",
"X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0\n",
"y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)\n",
"\n",
"count = 0\n",
"sample_size = 30\n",
@@ -239,10 +243,10 @@
"outputs": [],
"source": [
"from azureml.core.dataset import Dataset\n",
"web_paths = ['http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',\n",
" 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',\n",
" 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',\n",
" 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'\n",
"web_paths = ['https://azureopendatastorage.blob.core.windows.net/mnist/train-images-idx3-ubyte.gz',\n",
" 'https://azureopendatastorage.blob.core.windows.net/mnist/train-labels-idx1-ubyte.gz',\n",
" 'https://azureopendatastorage.blob.core.windows.net/mnist/t10k-images-idx3-ubyte.gz',\n",
" 'https://azureopendatastorage.blob.core.windows.net/mnist/t10k-labels-idx1-ubyte.gz'\n",
" ]\n",
"dataset = Dataset.File.from_files(path = web_paths)"
]
@@ -945,7 +949,7 @@
"\n",
"cd = CondaDependencies.create()\n",
"cd.add_conda_package('numpy')\n",
"cd.add_tensorflow_conda_package()\n",
"cd.add_pip_package('tensorflow==1.13.1')\n",
"cd.add_pip_package(\"azureml-defaults\")\n",
"cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')\n",
"\n",
@@ -968,7 +972,6 @@
"source": [
"from azureml.core.webservice import AciWebservice\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import Webservice\n",
"from azureml.core.model import Model\n",
"from azureml.core.environment import Environment\n",
"\n",

View File

@@ -171,13 +171,17 @@
"outputs": [],
"source": [
"import urllib\n",
"data_folder = 'data'\n",
"os.makedirs(data_folder, exist_ok=True)\n",
"\n",
"os.makedirs('./data/mnist', exist_ok=True)\n",
"\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename = './data/mnist/train-images.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename = './data/mnist/train-labels.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename = './data/mnist/test-images.gz')\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename = './data/mnist/test-labels.gz')"
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/train-images-idx3-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'train-images.gz'))\n",
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/train-labels-idx1-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'train-labels.gz'))\n",
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/t10k-images-idx3-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'test-images.gz'))\n",
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/t10k-labels-idx1-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'test-labels.gz'))"
]
},
{
@@ -204,13 +208,13 @@
"source": [
"from utils import load_data\n",
"\n",
"# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster.\n",
"X_train = load_data('./data/mnist/train-images.gz', False) / 255.0\n",
"y_train = load_data('./data/mnist/train-labels.gz', True).reshape(-1)\n",
"\n",
"X_test = load_data('./data/mnist/test-images.gz', False) / 255.0\n",
"y_test = load_data('./data/mnist/test-labels.gz', True).reshape(-1)\n",
"# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.\n",
"X_train = load_data(os.path.join(data_folder, 'train-images.gz'), False) / 255.0\n",
"X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0\n",
"y_train = load_data(os.path.join(data_folder, 'train-labels.gz'), True).reshape(-1)\n",
"y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)\n",
"\n",
"# now let's show some randomly chosen images from the training set.\n",
"count = 0\n",
"sample_size = 30\n",
"plt.figure(figsize = (16, 6))\n",

View File

@@ -100,7 +100,7 @@
"\n",
"# Check core SDK version number\n",
"\n",
"print(\"This notebook was created using SDK version 1.1.1rc0, you are currently running version\", azureml.core.VERSION)"
"print(\"This notebook was created using SDK version 1.2.0, you are currently running version\", azureml.core.VERSION)"
]
},
{

View File

@@ -145,9 +145,12 @@
"import requests\n",
"import os\n",
"\n",
"tf_code = requests.get(\"https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py\")\n",
"tf_code = requests.get(\"https://raw.githubusercontent.com/tensorflow/tensorflow/r2.1/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py\")\n",
"input_code = requests.get(\"https://raw.githubusercontent.com/tensorflow/tensorflow/r2.1/tensorflow/examples/tutorials/mnist/input_data.py\")\n",
"with open(os.path.join(exp_dir, \"mnist_with_summaries.py\"), \"w\") as file:\n",
" file.write(tf_code.text)"
" file.write(tf_code.text.replace(\"from tensorflow.examples.tutorials.mnist import input_data\", \"import input_data\"))\n",
"with open(os.path.join(exp_dir, \"input_data.py\"), \"w\") as file:\n",
" file.write(input_code.text)"
]
},
{
@@ -186,7 +189,7 @@
"from azureml.core import Experiment\n",
"from azureml.core.script_run_config import ScriptRunConfig\n",
"\n",
"logs_dir = os.path.join(os.curdir, \"logs\")\n",
"logs_dir = os.path.join(os.curdir, os.path.join(\"logs\", \"tb-logs\"))\n",
"data_dir = os.path.abspath(os.path.join(os.curdir, \"mnist_data\"))\n",
"\n",
"if not path.exists(data_dir):\n",
@@ -334,7 +337,8 @@
"tf_estimator = TensorFlow(source_directory=exp_dir,\n",
" compute_target=attached_dsvm_compute,\n",
" entry_script='mnist_with_summaries.py',\n",
" script_params=script_params)\n",
" script_params=script_params,\n",
" framework_version=\"2.0\")\n",
"\n",
"run = exp.submit(tf_estimator)\n",
"\n",
@@ -396,10 +400,9 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"\n",
"from azureml.core.compute import AmlCompute\n",
"# choose a name for your cluster\n",
"cluster_name = \"cpucluster\"\n",
"cluster_name = \"cpu-cluster\"\n",
"\n",
"cts = ws.compute_targets\n",
"found = False\n",
@@ -444,7 +447,8 @@
"tf_estimator = TensorFlow(source_directory=exp_dir,\n",
" compute_target=compute_target,\n",
" entry_script='mnist_with_summaries.py',\n",
" script_params=script_params)\n",
" script_params=script_params,\n",
" framework_version=\"2.0\")\n",
"\n",
"run = exp.submit(tf_estimator)\n",
"\n",
@@ -539,6 +543,24 @@
"name": "roastala"
}
],
"category": "training",
"compute": [
"Local",
"DSVM",
"AML Compute"
],
"datasets": [
"None"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"TensorFlow"
],
"friendly_name": "Tensorboard integration with run history",
"index_order": 3,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -556,28 +578,10 @@
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"friendly_name": "Tensorboard integration with run history",
"exclude_from_index": false,
"index_order": 3,
"category": "training",
"task": "Run a TensorFlow job and view its Tensorboard output live",
"datasets": [
"None"
],
"compute": [
"Local",
"DSVM",
"AML Compute"
],
"deployment": [
"None"
],
"framework": [
"TensorFlow"
],
"tags": [
"None"
]
],
"task": "Run a TensorFlow job and view its Tensorboard output live"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -3,4 +3,5 @@ dependencies:
- pip:
- azureml-sdk
- azureml-tensorboard
- tensorflow<1.15
- tensorflow
- setuptools>=41.0.0

View File

@@ -3,7 +3,8 @@ dependencies:
- pip:
- azureml-sdk
- azureml-tensorboard
- tensorflow<1.15.0
- tensorflow
- tqdm
- scipy
- sklearn
- setuptools>=41.0.0

View File

@@ -157,10 +157,14 @@
"data_folder = os.path.join(os.getcwd(), 'data')\n",
"os.makedirs(data_folder, exist_ok=True)\n",
"\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'train-images.gz'))\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'train-labels.gz'))\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'test-images.gz'))\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'test-labels.gz'))"
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/train-images-idx3-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'train-images.gz'))\n",
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/train-labels-idx1-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'train-labels.gz'))\n",
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/t10k-images-idx3-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'test-images.gz'))\n",
"urllib.request.urlretrieve('https://azureopendatastorage.blob.core.windows.net/mnist/t10k-labels-idx1-ubyte.gz',\n",
" filename=os.path.join(data_folder, 'test-labels.gz'))"
]
},
{
@@ -227,12 +231,10 @@
"outputs": [],
"source": [
"from azureml.core.dataset import Dataset\n",
"\n",
"web_paths = [\n",
" 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',\n",
" 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',\n",
" 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',\n",
" 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'\n",
"web_paths = ['https://azureopendatastorage.blob.core.windows.net/mnist/train-images-idx3-ubyte.gz',\n",
" 'https://azureopendatastorage.blob.core.windows.net/mnist/train-labels-idx1-ubyte.gz',\n",
" 'https://azureopendatastorage.blob.core.windows.net/mnist/t10k-images-idx3-ubyte.gz',\n",
" 'https://azureopendatastorage.blob.core.windows.net/mnist/t10k-labels-idx1-ubyte.gz'\n",
" ]\n",
"dataset = Dataset.File.from_files(path = web_paths)"
]

View File

@@ -149,6 +149,20 @@
" ssh_port=22, \n",
" username=os.environ.get('hdiusername', '<ssh_username>'), \n",
" password=os.environ.get('hdipassword', '<my_password>'))\n",
"\n",
"# The following Azure regions do not support attaching a HDI Cluster using the public IP address of the HDI Cluster.\n",
"# Instead, use the Azure Resource Manager ID of the HDI Cluster with the resource_id parameter:\n",
"# US East\n",
"# US West 2\n",
"# US South Central\n",
"# The resource ID of the HDI Cluster can be constructed using the\n",
"# subscription ID, resource group name, and cluster name using the following string format:\n",
"# /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.HDInsight/clusters/<cluster_name>. \n",
"# If in US East, US West 2, or US South Central, use the following instead:\n",
"# attach_config = HDInsightCompute.attach_configuration(resource_id='<resource_id>',\n",
"# ssh_port=22,\n",
"# username=os.environ.get('hdiusername', '<ssh_username>'),\n",
"# password=os.environ.get('hdipassword', '<my_password>'))\n",
" hdi_compute = ComputeTarget.attach(workspace=ws, \n",
" name='myhdi', \n",
" attach_configuration=attach_config)\n",

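Put together as runnable code, the resource_id variant described in the comments above would look roughly like the following sketch. The placeholder identifiers are assumptions to be replaced with real values, and `ws` is the workspace loaded earlier in the notebook:

```python
from azureml.core.compute import ComputeTarget, HDInsightCompute

# hypothetical identifiers - substitute your own values
subscription_id = '<subscription_id>'
resource_group = '<resource_group>'
cluster_name = '<cluster_name>'

# ARM resource ID in the string format described in the comments above
resource_id = ('/subscriptions/{}/resourceGroups/{}/providers/'
               'Microsoft.HDInsight/clusters/{}').format(subscription_id, resource_group, cluster_name)

attach_config = HDInsightCompute.attach_configuration(resource_id=resource_id,
                                                      ssh_port=22,
                                                      username='<ssh_username>',
                                                      password='<my_password>')
hdi_compute = ComputeTarget.attach(workspace=ws, name='myhdi', attach_configuration=attach_config)
hdi_compute.wait_for_completion(show_output=True)
```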
View File

@@ -167,7 +167,10 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "user_managed_env",
"msdoc": "how-to-track-experiments.md"
},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
@@ -192,7 +195,10 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "src",
"msdoc": "how-to-track-experiments.md"
},
"outputs": [],
"source": [
"from azureml.core import ScriptRunConfig\n",
@@ -204,7 +210,10 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "run",
"msdoc": "how-to-track-experiments.md"
},
"outputs": [],
"source": [
"run = exp.submit(src)"

View File

@@ -266,6 +266,22 @@
" ssh_port=22,\n",
" username=username,\n",
" private_key_file='./.ssh/id_rsa')\n",
"\n",
"\n",
"# The following Azure regions do not support attaching a virtual machine using the public IP address of the VM.\n",
"# Instead, use the Azure Resource Manager ID of the VM with the resource_id parameter:\n",
"# US East\n",
"# US West 2\n",
"# US South Central\n",
"# The resource ID of the VM can be constructed using the\n",
"# subscription ID, resource group name, and VM name using the following string format:\n",
"# /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Compute/virtualMachines/<vm_name>. \n",
"# If in US East, US West 2, or US South Central, use the following instead:\n",
"# attach_config = RemoteCompute.attach_configuration(resource_id='<resource_id>',\n",
"# ssh_port=22,\n",
"# username='username',\n",
"# private_key_file='./.ssh/id_rsa')\n",
"\n",
" attached_dsvm_compute = ComputeTarget.attach(workspace=ws,\n",
" name=compute_target_name,\n",
" attach_configuration=attach_config)\n",

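The same pattern applies here: a short sketch of the resource_id-based attach for a VM, assuming `ws` and `compute_target_name` are defined in earlier cells and the resource ID placeholders are filled in with real values:

```python
from azureml.core.compute import ComputeTarget, RemoteCompute

# hypothetical ARM resource ID - substitute your own subscription, resource group and VM name
resource_id = ('/subscriptions/<subscription_id>/resourceGroups/<resource_group>/'
               'providers/Microsoft.Compute/virtualMachines/<vm_name>')

attach_config = RemoteCompute.attach_configuration(resource_id=resource_id,
                                                   ssh_port=22,
                                                   username='username',
                                                   private_key_file='./.ssh/id_rsa')
attached_dsvm_compute = ComputeTarget.attach(workspace=ws,
                                             name=compute_target_name,
                                             attach_configuration=attach_config)
attached_dsvm_compute.wait_for_completion(show_output=True)
```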
View File

@@ -80,7 +80,9 @@
"metadata": {
"tags": [
"install"
]
],
"name": "load_ws",
"msdoc": "how-to-track-experiments.md"
},
"outputs": [],
"source": [
@@ -113,7 +115,10 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"name": "load_data",
"msdoc": "how-to-track-experiments.md"
},
"outputs": [],
"source": [
"from sklearn.datasets import load_diabetes\n",
@@ -155,7 +160,9 @@
"tags": [
"local run",
"outputs upload"
]
],
"name": "create_experiment",
"msdoc": "how-to-track-experiments.md"
},
"outputs": [],
"source": [

View File

@@ -1,10 +0,0 @@
name: labeled-datasets
dependencies:
- pip:
- azureml-sdk
- azureml-dataprep
- pandas
- fuse
- azureml.contrib.dataset
- matplotlib
- torchvision

View File

@@ -141,7 +141,7 @@
"from azureml.core.compute_target import ComputeTargetException\n",
"\n",
"# choose a name for your cluster\n",
"cluster_name = \"your-cluster-name\"\n",
"cluster_name = \"gpu-cluster\"\n",
"\n",
"try:\n",
" compute_target = ComputeTarget(workspace=workspace, name=cluster_name)\n",

View File

@@ -3,5 +3,5 @@ dependencies:
- pip:
- azureml-sdk
- azureml-dataprep
- pandas
- pandas<=0.23.4
- fuse

View File

@@ -23,8 +23,8 @@
"\n",
"The detailed APIs to be demoed in this script are:\n",
"- Create Tabular Dataset instance\n",
"- Assign fine timestamp column and coarse timestamp column for Tabular Dataset to activate Time Series related APIs\n",
"- Clear fine timestamp column and coarse timestamp column\n",
"- Assign timestamp column and partition timestamp column for Tabular Dataset to activate Time Series related APIs\n",
"- Clear timestamp column and partition timestamp column\n",
"- Filter in data before a specific time\n",
"- Filter in data after a specific time\n",
"- Filter in data in a specific time range\n",
@@ -157,7 +157,7 @@
"source": [
"Create Tabular Dataset instance from blob storage datapath.\n",
"\n",
"**TIP:** you can set virtual columns in the partition_format. I.e. if you partition the weather data by state and city, the path can be '/{STATE}/{CITY}/{coarse_time:yyy/MM}/data.parquet'. STATE and CITY would then appear as virtual columns in the dataset, allowing for efficient filtering by these grains. "
"**TIP:** you can set virtual columns in the partition_format. I.e. if you partition the weather data by state and city, the path can be '/{STATE}/{CITY}/{partition_time:yyy/MM}/data.parquet'. STATE and CITY would then appear as virtual columns in the dataset, allowing for efficient filtering by these timestamps. "
]
},
{
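As a concrete illustration of the tip above, a dataset partitioned by state and city could be loaded so that STATE and CITY surface as virtual columns. This is a sketch under the assumption that a 'weather-data' folder with that layout exists on the datastore handle `dstore` used below; it is not the notebook's actual data layout:

```python
from azureml.core.dataset import Dataset

# hypothetical layout: weather-data/<STATE>/<CITY>/<yyyy>/<MM>/data.parquet
datastore_path = [(dstore, 'weather-data/*/*/*/*/data.parquet')]
dataset = Dataset.Tabular.from_parquet_files(
    path=datastore_path,
    partition_format='weather-data/{STATE}/{CITY}/{partition_time:yyyy/MM}/data.parquet')

# STATE and CITY now appear as ordinary columns next to the parquet columns,
# so the preview below can be inspected (or filtered) on those values
print(dataset.take(5).to_pandas_dataframe()[['STATE', 'CITY']])
```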
@@ -167,14 +167,14 @@
"outputs": [],
"source": [
"datastore_path = [(dstore, dset_name + '/*/*/data.parquet')]\n",
"dataset = Dataset.Tabular.from_parquet_files(path=datastore_path, partition_format = dset_name + '/{coarse_time:yyyy/MM}/data.parquet')"
"dataset = Dataset.Tabular.from_parquet_files(path=datastore_path, partition_format = dset_name + '/{partition_time:yyyy/MM}/data.parquet')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Assign fine timestamp column for Tabular Dataset to activate Time Series related APIs. The column to be assigned should be a Date type, otherwise the assigning will fail."
"Assign timestamp column for Tabular Dataset to activate Time Series related APIs. The column to be assigned should be a Date type, otherwise the assigning will fail."
]
},
{
@@ -183,8 +183,8 @@
"metadata": {},
"outputs": [],
"source": [
"# for this demo, leave out coarse_time so fine_grain_timestamp is used\n",
"tsd = dataset.with_timestamp_columns(fine_grain_timestamp='datetime') # coarse_grain_timestamp='coarse_time')"
"# for this demo, leave out partition_time so timestamp is used\n",
"tsd = dataset.with_timestamp_columns(timestamp='datetime') # partition_timestamp='partition_time')"
]
},
{
@@ -280,7 +280,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**NOTE:** You must set the coarse_grain_timestamp to None to filter on the fine_grain_timestamp. The below cell will fail unless the second line is uncommented "
"**NOTE:** You must set the partition_timestamp to None to filter on the timestamp. The below cell will fail unless the second line is uncommented "
]
},
{
@@ -290,7 +290,7 @@
"outputs": [],
"source": [
"# select data that occurs within a given time range\n",
"#tsd = tsd.with_timestamp_columns(fine_grain_timestamp='datetime', coarse_grain_timestamp=None)\n",
"#tsd = tsd.with_timestamp_columns(timestamp='datetime', partition_timestamp=None)\n",
"tsd2 = tsd.time_after(datetime(2019, 1, 2)).time_before(datetime(2019, 1, 10))\n",
"tsd2.to_pandas_dataframe().head(5)"
]
@@ -371,9 +371,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"metadata": {},
"outputs": [],
"source": [
"tsd2 = tsd.drop_columns(columns=['snowDepth', 'version', 'datetime'])\n",
@@ -481,7 +479,7 @@
"metadata": {},
"outputs": [],
"source": [
"tsd2 = tsd.keep_columns(columns=['snowDepth', 'datetime', 'coarse_time'], validate=False)\n",
"tsd2 = tsd.keep_columns(columns=['snowDepth', 'datetime', 'partition_time'], validate=False)\n",
"tsd2.to_pandas_dataframe().tail()"
]
},
@@ -506,9 +504,9 @@
"metadata": {},
"source": [
"Rules for reseting are:\n",
"- You cannot assign 'None' to fine_grain_timestamp while assign a valid column name to coarse_grain_timestamp because coarse_grain_timestamp is optional while fine_grain_timestamp is mandatory for Tabular time series data.\n",
"- If you assign 'None' to fine_grain_timestamp, then both fine_grain_timestamp and coarse_grain_timestamp will all be cleared.\n",
"- If you assign only 'None' to coarse_grain_timestamp, then only coarse_grain_timestamp will be cleared."
"- You cannot assign 'None' to timestamp while assign a valid column name to partition_timestamp because partition_timestamp is optional while timestamp is mandatory for Tabular time series data.\n",
"- If you assign 'None' to timestamp, then both timestamp and partition_timestamp will all be cleared.\n",
"- If you assign only 'None' to partition_timestamp, then only partition_timestamp will be cleared."
]
},
{
@@ -519,17 +517,17 @@
"source": [
"# Illegal clearing, exception is expected.\n",
"try:\n",
" tsd2 = tsd.with_timestamp_columns(fine_grain_timestamp=None, coarse_grain_timestamp='coarse_time')\n",
" tsd2 = tsd.with_timestamp_columns(timestamp=None, partition_timestamp='partition_time')\n",
"except Exception as e:\n",
" print('Cleaning not allowed because {}'.format(str(e)))\n",
"\n",
"# clear both\n",
"tsd2 = tsd.with_timestamp_columns(fine_grain_timestamp=None, coarse_grain_timestamp=None)\n",
"tsd2 = tsd.with_timestamp_columns(timestamp=None, partition_timestamp=None)\n",
"print('after clean both with None/None, timestamp columns are: {}'.format(tsd2.timestamp_columns))\n",
"\n",
"# clear coarse_grain_timestamp only and assign 'datetime' as fine timestamp column\n",
"tsd2 = tsd2.with_timestamp_columns(fine_grain_timestamp='datetime', coarse_grain_timestamp=None)\n",
"print('after clean coarse timestamp column, timestamp columns are: {}'.format(tsd2.timestamp_columns))"
"# clear partition_timestamp only and assign 'datetime' as timestamp column\n",
"tsd2 = tsd2.with_timestamp_columns(timestamp='datetime', partition_timestamp=None)\n",
"print('after clean partition timestamp column, timestamp columns are: {}'.format(tsd2.timestamp_columns))"
]
},
{
@@ -543,7 +541,7 @@
"metadata": {
"authors": [
{
"name": "ylxiong"
"name": "jamgan"
}
],
"category": "tutorial",

View File

@@ -3,4 +3,4 @@ dependencies:
- pip:
- azureml-sdk
- azureml-dataprep
- pandas
- pandas<=0.23.4

View File

@@ -529,8 +529,9 @@
"metadata": {},
"outputs": [],
"source": [
"print(run.get_metrics())\n",
"metrics = run.get_metrics()"
"run.wait_for_completion()\n",
"metrics = run.get_metrics()\n",
"print(metrics)"
]
},
{

View File

@@ -4,6 +4,6 @@ dependencies:
- azureml-sdk
- azureml-widgets
- azureml-dataprep
- pandas
- pandas<=0.23.4
- fuse
- scikit-learn

View File

@@ -28,7 +28,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| :star:[Datasets with ML Pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.ipynb) | Train | Fashion MNIST | Remote | None | Azure ML | Dataset, Pipeline, Estimator, ScriptRun |
| :star:[Filtering data using Tabular Timeseries Dataset related API](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb) | Filtering | NOAA | Local | None | Azure ML | Dataset, Tabular Timeseries |
| :star:[Train with Datasets (Tabular and File)](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/work-with-data/datasets-tutorial/train-with-datasets/train-with-datasets.ipynb) | Train | Iris, Diabetes | Remote | None | Azure ML | Dataset, Estimator, ScriptRun |
| [Forecasting away from training data](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/automl-forecasting-function.ipynb) | Forecasting | None | Remote | None | Azure ML AutoML | Forecasting, Confidence Intervals |
| [Forecasting away from training data](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-high-frequency/auto-ml-forecasting-function.ipynb) | Forecasting | None | Remote | None | Azure ML AutoML | Forecasting, Confidence Intervals |
| [Automated ML run with basic edition features.](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb) | Classification | Bankmarketing | AML | ACI | None | featurization, explainability, remote_run, AutomatedML |
| [Classification of credit card fraudulent transactions using Automated ML](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb) | Classification | Creditcard | AML Compute | None | None | remote_run, AutomatedML |
| [Automated ML run with featurization and model explainability.](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/regression-hardware-performance-explanation-and-featurization/auto-ml-regression-hardware-performance-explanation-and-featurization.ipynb) | Regression | MachineData | AML | ACI | None | featurization, explainability, remote_run, AutomatedML |
@@ -117,6 +117,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [enable-app-insights-in-production-service](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) | | | | | | |
| [onnx-model-register-and-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-model-register-and-deploy.ipynb) | | | | | | |
| [production-deploy-to-aks](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb) | | | | | | |
| [production-deploy-to-aks-gpu](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.ipynb) | | | | | | |
| [tensorflow-model-register-and-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/tensorflow/tensorflow-model-register-and-deploy.ipynb) | | | | | | |
| [explain-model-on-amlcompute](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/explain-model/azure-integration/remote-explanation/explain-model-on-amlcompute.ipynb) | | | | | | |
| [save-retrieve-explanations-run-history](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/explain-model/azure-integration/run-history/save-retrieve-explanations-run-history.ipynb) | | | | | | |

View File

@@ -102,7 +102,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.1.1rc0 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.2.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -28,7 +28,6 @@ The following tutorials are intended to provide examples of more advanced featur
| Tutorial | Description | Notebook | Task | Framework |
| --- | --- | --- | --- | --- |
| [Build an Azure Machine Learning pipeline for batch scoring](https://docs.microsoft.com/azure/machine-learning/tutorial-pipeline-batch-scoring-classification) | Create an Azure Machine Learning pipeline to run batch scoring image classification jobs | [tutorial-pipeline-batch-scoring-classification.ipynb](machine-learning-pipelines-advanced/tutorial-pipeline-batch-scoring-classification.ipynb) | Image Classification | TensorFlow
Complete these tutorials to learn how to train and deploy models using Azure Machine Learning services and Python SDK. These Notebooks accompany the tutorial articles for:
For additional documentation and resources, see the [official documentation site for Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/).

View File

@@ -30,7 +30,9 @@
"\n",
"## Prerequisites\n",
"\n",
"See prerequisites in the [Azure Machine Learning documentation](https://docs.microsoft.com/azure/machine-learning/service/tutorial-train-models-with-aml#prerequisites)."
"See prerequisites in the [Azure Machine Learning documentation](https://docs.microsoft.com/azure/machine-learning/service/tutorial-train-models-with-aml#prerequisites).\n",
"\n",
"On the computer running this notebook, conda install matplotlib, numpy, scikit-learn=0.22.1"
]
},
{
@@ -126,7 +128,8 @@
"metadata": {},
"source": [
"### Create or Attach existing compute resource\n",
"By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.\n",
"By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. You will submit Python code to run on this VM later in the tutorial. \n",
"The code below creates the compute clusters for you if they don't already exist in your workspace.\n",
"\n",
"**Creation of compute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace the code will skip the creation process."
]
@@ -263,7 +266,7 @@
"source": [
"## Train on a remote cluster\n",
"\n",
"For this task, submit the job to the remote training cluster you set up earlier. To submit a job you:\n",
"For this task, you submit the job to run on the remote training cluster you set up earlier. To submit a job you:\n",
"* Create a directory\n",
"* Create a training script\n",
"* Create an estimator object\n",
@@ -308,7 +311,7 @@
"import glob\n",
"\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.externals import joblib\n",
"import joblib\n",
"\n",
"from azureml.core import Run\n",
"from utils import load_data\n",
@@ -396,15 +399,20 @@
"source": [
"### Create an estimator\n",
"\n",
"An estimator object is used to submit the run. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as generic Estimator. Create SKLearn estimator for scikit-learn model, by specifying\n",
"An estimator object is used to submit the run. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as generic Estimator. Create an estimator by specifying\n",
"\n",
"* The name of the estimator object, `est`\n",
"* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. \n",
"* The compute target. In this case you will use the AmlCompute you created\n",
"* The training script name, train.py\n",
"* Parameters required from the training script \n",
"* An environment that contains the libraries needed to run the script\n",
"* Parameters required from the training script. \n",
"\n",
"In this tutorial, the target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for execution. The data_folder is set to use the dataset."
"In this tutorial, the target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for execution. The data_folder is set to use the dataset.\n",
"\n",
"First, create the environment that contains: the scikit-learn library, azureml-dataprep required for accessing the dataset, and azureml-defaults which contains the dependencies for logging metrics. The azureml-defaults also contains the dependencies required for deploying the model as a web service later in the part 2 of the tutorial.\n",
"\n",
"Once the environment is defined, register it with the Workspace to re-use it in part 2 of the tutorial."
]
},
{
@@ -417,10 +425,20 @@
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"# to install required packages\n",
"env = Environment('my_env')\n",
"cd = CondaDependencies.create(pip_packages=['azureml-sdk','scikit-learn==0.22.1','azureml-dataprep[pandas,fuse]>=1.1.14'])\n",
"env = Environment('tutorial-env')\n",
"cd = CondaDependencies.create(pip_packages=['azureml-dataprep[pandas,fuse]>=1.1.14', 'azureml-defaults'], conda_packages = ['scikit-learn==0.22.1'])\n",
"\n",
"env.python.conda_dependencies = cd"
"env.python.conda_dependencies = cd\n",
"\n",
"# Register environment to re-use later\n",
"env.register(workspace = ws)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, create the estimator by specifying the training script, compute target and environment."
]
},
{
@@ -433,7 +451,7 @@
},
"outputs": [],
"source": [
"from azureml.train.sklearn import SKLearn\n",
"from azureml.train.estimator import Estimator\n",
"\n",
"script_params = {\n",
" # to mount files referenced by mnist dataset\n",
@@ -441,7 +459,7 @@
" '--regularization': 0.5\n",
"}\n",
"\n",
"est = SKLearn(source_directory=script_folder,\n",
"est = Estimator(source_directory=script_folder,\n",
" script_params=script_params,\n",
" compute_target=compute_target,\n",
" environment_definition=env,\n",
@@ -666,7 +684,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
"version": "3.7.6"
},
"msauthor": "roastala"
},

View File

@@ -159,16 +159,14 @@
"metadata": {},
"outputs": [],
"source": [
"# download test data\n",
"import os\n",
"import urllib.request\n",
"from azureml.core import Dataset\n",
"from azureml.opendatasets import MNIST\n",
"\n",
"data_folder = os.path.join(os.getcwd(), 'data')\n",
"os.makedirs(data_folder, exist_ok=True)\n",
"\n",
"\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename=os.path.join(data_folder, 'test-images.gz'))\n",
"urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename=os.path.join(data_folder, 'test-labels.gz'))"
"mnist_file_dataset = MNIST.get_file_dataset()\n",
"mnist_file_dataset.download(data_folder, overwrite=True)"
]
},
{
@@ -191,8 +189,8 @@
"\n",
"data_folder = os.path.join(os.getcwd(), 'data')\n",
"# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster\n",
"X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0\n",
"y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)"
"X_test = load_data(os.path.join(data_folder, 't10k-images-idx3-ubyte.gz'), False) / 255.0\n",
"y_test = load_data(os.path.join(data_folder, 't10k-labels-idx1-ubyte.gz'), True).reshape(-1)"
]
},
{
@@ -348,7 +346,7 @@
"from azureml.core.conda_dependencies import CondaDependencies \n",
"\n",
"myenv = CondaDependencies()\n",
"myenv.add_pip_package(\"scikit-learn==0.22.1\")\n",
"myenv.add_conda_package(\"scikit-learn==0.22.1\")\n",
"myenv.add_pip_package(\"azureml-defaults\")\n",
"\n",
"with open(\"myenv.yml\",\"w\") as f:\n",
@@ -405,7 +403,7 @@
"metadata": {},
"source": [
"### Deploy in ACI\n",
"Estimated time to complete: **about 7-8 minutes**\n",
"Estimated time to complete: **about 2-5 minutes**\n",
"\n",
"Configure the image and deploy. The following code goes through these steps:\n",
"\n",
@@ -436,7 +434,7 @@
"from azureml.core.environment import Environment\n",
"\n",
"\n",
"myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
"myenv = Environment.get(workspace=ws, name=\"tutorial-env\", version=\"1\")\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)\n",
"\n",
"service = Model.deploy(workspace=ws, \n",
@@ -637,7 +635,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.7.6"
},
"msauthor": "sgilley"
},

View File

@@ -4,3 +4,5 @@ dependencies:
- azureml-sdk
- matplotlib
- sklearn
- pandas
- azureml-opendatasets