Compare commits

...

13 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Cody | 4b692df4c5 | Update README.md | 2020-10-27 17:04:59 -07:00 |
| Harneet Virk | 1b0d75cb45 | Merge pull request #1206 from Azure/release_update/Release-71 (update samples from Release-71 as a part of SDK 1.17.0 release) | 2020-10-26 22:29:48 -07:00 |
| amlrelsa-ms | 5c38272fb4 | update samples from Release-71 as a part of SDK release | 2020-10-27 04:11:39 +00:00 |
| Harneet Virk | e026c56f19 | Merge pull request #1200 from Azure/cody/add-new-repo-link (update readme) | 2020-10-22 10:50:03 -07:00 |
| Cody | 4aad830f1c | update readme | 2020-10-22 09:13:20 -07:00 |
| Harneet Virk | c1b125025a | Merge pull request #1198 from harneetvirk/master (Fixing/Removing broken links) | 2020-10-20 12:30:46 -07:00 |
| Harneet Virk | 9f364f7638 | Update README.md | 2020-10-20 12:30:03 -07:00 |
| Harneet Virk | 4beb749a76 | Fixing/Removing the broken links | 2020-10-20 12:28:45 -07:00 |
| Harneet Virk | 04fe8c4580 | Merge pull request #1191 from savitamittal1/patch-4 (Update README.md) | 2020-10-17 08:48:20 -07:00 |
| Harneet Virk | 498018451a | Merge pull request #1193 from savitamittal1/patch-6 (Update automl-databricks-local-with-deployment.ipynb) | 2020-10-17 08:47:54 -07:00 |
| savitamittal1 | 04305e33f0 | Update automl-databricks-local-with-deployment.ipynb | 2020-10-16 23:58:12 -07:00 |
| savitamittal1 | d22e76d5e0 | Update README.md | 2020-10-16 23:53:41 -07:00 |
| Harneet Virk | d71c482f75 | Merge pull request #1184 from Azure/release_update/Release-70 (update samples from Release-70 as a part of SDK 1.16.0 release) | 2020-10-12 22:24:25 -07:00 |
31 changed files with 649 additions and 94 deletions

View File

@@ -1,5 +1,7 @@
 # Azure Machine Learning service example notebooks
+> A community-driven repository of training and scoring examples can be found at https://github.com/Azure/azureml-examples
 This repository contains example notebooks demonstrating the [Azure Machine Learning](https://azure.microsoft.com/en-us/services/machine-learning-service/) Python SDK which allows you to build, train, deploy and manage machine learning solutions using Azure. The AML SDK allows you the choice of using local or cloud compute resources, while managing and maintaining the complete data science workflow from the cloud.
 ![Azure ML Workflow](https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/machine-learning/media/concept-azure-machine-learning-architecture/workflow.png)

View File

@@ -103,7 +103,7 @@
"source": [ "source": [
"import azureml.core\n", "import azureml.core\n",
"\n", "\n",
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },
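
The same 1.16.0 to 1.17.0 version-check bump recurs across the notebook diffs below. To reproduce it locally, a reader would first upgrade the SDK; a minimal sketch, assuming a Jupyter or Databricks notebook cell (the `[automl]` extra is taken from the Databricks instructions later in this diff):

```python
# Upgrade the Azure ML SDK so the version check below reports 1.17.0.
# Run in a notebook cell and restart the kernel afterwards so the new
# version is picked up by `import azureml.core`.
%pip install --upgrade "azureml-sdk[automl]"

import azureml.core
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
```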

View File

@@ -97,11 +97,10 @@ jupyter notebook
<a name="databricks"></a> <a name="databricks"></a>
## Setup using Azure Databricks ## Setup using Azure Databricks
**NOTE**: Please create your Azure Databricks cluster as v6.0 (high concurrency preferred) with **Python 3** (dropdown). **NOTE**: Please create your Azure Databricks cluster as v7.1 (high concurrency preferred) with **Python 3** (dropdown).
**NOTE**: You should at least have contributor access to your Azure subcription to run the notebook. **NOTE**: You should at least have contributor access to your Azure subcription to run the notebook.
- Please remove the previous SDK version if there is any and install the latest SDK by installing **azureml-sdk[automl]** as a PyPi library in Azure Databricks workspace. - You can find the detail Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl).
- You can find the detail Readme instructions at [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks). - Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks/automl) and import into the Azure databricks workspace.
- Download the sample notebook automl-databricks-local-01.ipynb from [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks) and import into the Azure databricks workspace.
- Attach the notebook to the cluster. - Attach the notebook to the cluster.
<a name="samples"></a> <a name="samples"></a>

View File

@@ -9,7 +9,7 @@ dependencies:
 - numpy==1.18.5
 - cython
 - urllib3<1.24
-- scipy==1.4.1
+- scipy>=1.4.1,<=1.5.2
 - scikit-learn==0.22.1
 - pandas==0.25.1
 - py-xgboost<=0.90
@@ -24,5 +24,5 @@ dependencies:
 - pytorch-transformers==1.0.0
 - spacy==2.1.8
 - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.16.0/validated_win32_requirements.txt [--no-deps]
+- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_win32_requirements.txt [--no-deps]

View File

@@ -9,7 +9,7 @@ dependencies:
 - numpy==1.18.5
 - cython
 - urllib3<1.24
-- scipy==1.4.1
+- scipy>=1.4.1,<=1.5.2
 - scikit-learn==0.22.1
 - pandas==0.25.1
 - py-xgboost<=0.90
@@ -24,5 +24,5 @@ dependencies:
 - pytorch-transformers==1.0.0
 - spacy==2.1.8
 - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.16.0/validated_linux_requirements.txt [--no-deps]
+- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_linux_requirements.txt [--no-deps]

View File

@@ -10,7 +10,7 @@ dependencies:
 - numpy==1.18.5
 - cython
 - urllib3<1.24
-- scipy==1.4.1
+- scipy>=1.4.1,<=1.5.2
 - scikit-learn==0.22.1
 - pandas==0.25.1
 - py-xgboost<=0.90
@@ -25,4 +25,4 @@ dependencies:
 - pytorch-transformers==1.0.0
 - spacy==2.1.8
 - https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
-- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.16.0/validated_darwin_requirements.txt [--no-deps]
+- -r https://automlcesdkdataresources.blob.core.windows.net/validated-requirements/1.17.0/validated_darwin_requirements.txt [--no-deps]
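
The three `automl_env*.yml` hunks above make the same two changes: relax the `scipy` pin and point the validated-requirements file at 1.17.0. A quick check that an environment built from one of these files honored the pins (a sketch, assuming that environment is the active kernel):

```python
# Sanity-check that the solver honored the pins in the environment file.
import pandas
import scipy
import sklearn

print("scipy:", scipy.__version__)           # expected within >=1.4.1,<=1.5.2
print("scikit-learn:", sklearn.__version__)  # expected 0.22.1
print("pandas:", pandas.__version__)         # expected 0.25.1
```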

View File

@@ -105,7 +105,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -93,7 +93,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -88,7 +88,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -92,7 +92,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -114,7 +114,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -87,7 +87,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -97,7 +97,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -94,7 +94,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -82,7 +82,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -96,7 +96,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -98,7 +98,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -92,7 +92,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -1,9 +1,21 @@
-# Adding an init script to an Azure Databricks cluster
-The [azureml-cluster-init.sh](./azureml-cluster-init.sh) script configures the environment to
-1. Install the latest AutoML library
-To create the Azure Databricks cluster-scoped init script
+# Automated ML introduction
+Automated machine learning (automated ML) builds high-quality machine learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, and automated ML will give you a high-quality machine learning model that you can use for predictions.
+If you are new to data science, automated ML will help you get jumpstarted by simplifying machine learning model building. It abstracts you from needing to perform model and hyperparameter selection, and in one step creates a high-quality trained model for you to use.
+If you are an experienced data scientist, automated ML will help increase your productivity by intelligently performing the model and hyperparameter selection for your training, and it generates high-quality models much quicker than manually specifying several combinations of parameters and running training jobs. Automated ML provides visibility and access to all the training jobs and the performance characteristics of the models to help you further tune the pipeline if you desire.
+# Install instructions using Azure Databricks
+#### For Databricks non-ML runtime 7.1 (Scala 2.12, Spark 3.0.0) and up, install the Automated Machine Learning SDK by adding and running the following command as the first cell of your notebook. This installs the AutoML dependencies specific to your notebook.
+%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt
+#### For Databricks non-ML runtime 7.0 and lower, install the Automated Machine Learning SDK using an init script, as shown below, before running the notebook.
+**Create the Azure Databricks cluster-scoped init script 'azureml-cluster-init.sh' as follows:**
 1. Create the base directory you want to store the init script in if it does not exist.
 ```
@@ -15,7 +27,7 @@ To create the Azure Databricks cluster-scoped init script
 dbutils.fs.put("/databricks/init/azureml-cluster-init.sh","""
 #!/bin/bash
 set -ex
-/databricks/python/bin/pip install -r https://aka.ms/automl_linux_requirements.txt
+/databricks/python/bin/pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt
 """, True)
 ```
@@ -24,6 +36,8 @@ To create the Azure Databricks cluster-scoped init script
 display(dbutils.fs.ls("dbfs:/databricks/init/azureml-cluster-init.sh"))
 ```
+**Install libraries to the cluster using the init script 'azureml-cluster-init.sh' created in the previous step:**
 1. Configure the cluster to run the script.
     * Using the cluster configuration page
     1. On the cluster configuration page, click the Advanced Options toggle.
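
Whichever install path is used (%pip for runtime 7.1 and up, the init script for 7.0 and lower), a short check in the first notebook cell confirms the install worked; a sketch, under the assumption that the requirements file installs the `azureml-train-automl` package:

```python
# Confirm the AutoML SDK installed by %pip / the init script is importable.
import azureml.core
import azureml.train.automl  # raises ImportError if the install did not run

print("Azure ML SDK version:", azureml.core.VERSION)
```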

View File

@@ -17,9 +17,9 @@
"\n", "\n",
"**For Databricks non ML runtime 7.1(scala 2.21, spark 3.0.0) and up, Install AML sdk by running the following command in the first cell of the notebook.**\n", "**For Databricks non ML runtime 7.1(scala 2.21, spark 3.0.0) and up, Install AML sdk by running the following command in the first cell of the notebook.**\n",
"\n", "\n",
"%pip install -r https://aka.ms/automl_linux_requirements.txt\n", "%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt\n",
"\n", "\n",
"**For Databricks non ML runtime 7.0 and lower, Install AML sdk using init script as shown in [readme](readme.md) before running this notebook.**\n" "**For Databricks non ML runtime 7.0 and lower, Install AML sdk using init script as shown in [readme](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-databricks/automl/README.md) before running this notebook.**\n"
] ]
}, },
{ {

View File

@@ -17,9 +17,9 @@
"\n", "\n",
"**For Databricks non ML runtime 7.1(scala 2.21, spark 3.0.0) and up, Install AML sdk by running the following command in the first cell of the notebook.**\n", "**For Databricks non ML runtime 7.1(scala 2.21, spark 3.0.0) and up, Install AML sdk by running the following command in the first cell of the notebook.**\n",
"\n", "\n",
"%pip install -r https://aka.ms/automl_linux_requirements.txt\n", "%pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt\n",
"\n", "\n",
"**For Databricks non ML runtime 7.0 and lower, Install AML sdk using init script as shown in [readme](readme.md) before running this notebook.**" "**For Databricks non ML runtime 7.0 and lower, Install AML sdk using init script as shown in [readme](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-databricks/automl/README.md) before running this notebook.**"
] ]
}, },
{ {

View File

@@ -44,9 +44,11 @@
"import azureml.core\n", "import azureml.core\n",
"from azureml.core import Workspace, Experiment, Datastore, Dataset\n", "from azureml.core import Workspace, Experiment, Datastore, Dataset\n",
"from azureml.core.compute import ComputeTarget, AmlCompute\n", "from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.runconfig import RunConfiguration\n",
"from azureml.exceptions import ComputeTargetException\n", "from azureml.exceptions import ComputeTargetException\n",
"from azureml.pipeline.steps import HyperDriveStep, HyperDriveStepRun\n", "from azureml.pipeline.steps import HyperDriveStep, HyperDriveStepRun, PythonScriptStep\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n", "from azureml.pipeline.core import Pipeline, PipelineData, TrainingOutput\n",
"from azureml.train.dnn import TensorFlow\n", "from azureml.train.dnn import TensorFlow\n",
"# from azureml.train.hyperdrive import *\n", "# from azureml.train.hyperdrive import *\n",
"from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal\n", "from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal\n",
@@ -232,7 +234,22 @@
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
" compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n", " compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)\n",
"\n", "\n",
"print(\"Azure Machine Learning Compute attached\")" "print(\"Azure Machine Learning Compute attached\")\n",
"\n",
"cpu_cluster_name = \"cpu-cluster\"\n",
"\n",
"try:\n",
" cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n",
" print(\"Found existing cpu-cluster\")\n",
"except ComputeTargetException:\n",
" print(\"Creating new cpu-cluster\")\n",
" \n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_D2_V2\",\n",
" min_nodes=0,\n",
" max_nodes=4)\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
" \n",
" cpu_cluster.wait_for_completion(show_output=True)"
] ]
}, },
{ {
@@ -401,7 +418,15 @@
"metrics_output_name = 'metrics_output'\n", "metrics_output_name = 'metrics_output'\n",
"metrics_data = PipelineData(name='metrics_data',\n", "metrics_data = PipelineData(name='metrics_data',\n",
" datastore=datastore,\n", " datastore=datastore,\n",
" pipeline_output_name=metrics_output_name)\n", " pipeline_output_name=metrics_output_name,\n",
" training_output=TrainingOutput(\"Metrics\"))\n",
"\n",
"model_output_name = 'model_output'\n",
"saved_model = PipelineData(name='saved_model',\n",
" datastore=datastore,\n",
" pipeline_output_name=model_output_name,\n",
" training_output=TrainingOutput(\"Model\",\n",
" model_file=\"outputs/model/saved_model.pb\"))\n",
"\n", "\n",
"hd_step_name='hd_step01'\n", "hd_step_name='hd_step01'\n",
"hd_step = HyperDriveStep(\n", "hd_step = HyperDriveStep(\n",
@@ -409,7 +434,39 @@
" hyperdrive_config=hd_config,\n", " hyperdrive_config=hd_config,\n",
" estimator_entry_script_arguments=['--data-folder', data_folder],\n", " estimator_entry_script_arguments=['--data-folder', data_folder],\n",
" inputs=[data_folder],\n", " inputs=[data_folder],\n",
" metrics_output=metrics_data)" " outputs=[metrics_data, saved_model])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Find and register best model\n",
"When all the jobs finish, we can choose to register the model that has the highest accuracy through an additional PythonScriptStep.\n",
"\n",
"Through this additional register_model_step, we register the chosen files as a model named `tf-dnn-mnist` under the workspace for deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"conda_dep = CondaDependencies()\n",
"conda_dep.add_pip_package(\"azureml-sdk\")\n",
"\n",
"rcfg = RunConfiguration(conda_dependencies=conda_dep)\n",
"\n",
"register_model_step = PythonScriptStep(script_name='register_model.py',\n",
" name=\"register_model_step01\",\n",
" inputs=[saved_model],\n",
" compute_target=cpu_cluster,\n",
" arguments=[\"--saved-model\", saved_model],\n",
" allow_reuse=True,\n",
" runconfig=rcfg)\n",
"\n",
"register_model_step.run_after(hd_step)"
] ]
}, },
{ {
@@ -425,7 +482,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"pipeline = Pipeline(workspace=ws, steps=[hd_step])\n", "pipeline = Pipeline(workspace=ws, steps=[hd_step, register_model_step])\n",
"pipeline_run = exp.submit(pipeline)" "pipeline_run = exp.submit(pipeline)"
] ]
}, },
@@ -500,58 +557,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Find and register best model\n", "For model deployment, please refer to [Training, hyperparameter tune, and deploy with TensorFlow](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb)."
"When all the jobs finish, we can find out the one that has the highest accuracy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hd_step_run = HyperDriveStepRun(step_run=pipeline_run.find_step_run(hd_step_name)[0])\n",
"best_run = hd_step_run.get_best_run_by_primary_metric()\n",
"best_run"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's list the model files uploaded during the run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(best_run.get_file_names())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can then register the folder (and all files in it) as a model named `tf-dnn-mnist` under the workspace for deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model = best_run.register_model(model_name='tf-dnn-mnist', model_path='outputs/model')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For model deployment, please refer to [Training, hyperparameter tune, and deploy with TensorFlow](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/deployment/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb)."
] ]
} }
], ],
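
The notebook-JSON hunks above are hard to read as a diff, so here is the post-change wiring flattened into plain Python: the HyperDriveStep exposes its metrics and best model as `TrainingOutput`-backed `PipelineData`, and a `PythonScriptStep` registers the model once tuning finishes. This is a sketch assembled from the added lines above; `ws`, `exp`, `hd_config`, `datastore`, `data_folder`, and `cpu_cluster` are assumed to be defined earlier in the notebook.

```python
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import RunConfiguration
from azureml.pipeline.core import Pipeline, PipelineData, TrainingOutput
from azureml.pipeline.steps import HyperDriveStep, PythonScriptStep

# Expose the HyperDrive metrics and the best model file as pipeline outputs.
metrics_data = PipelineData(name='metrics_data',
                            datastore=datastore,
                            pipeline_output_name='metrics_output',
                            training_output=TrainingOutput("Metrics"))
saved_model = PipelineData(name='saved_model',
                           datastore=datastore,
                           pipeline_output_name='model_output',
                           training_output=TrainingOutput("Model",
                                                          model_file="outputs/model/saved_model.pb"))

hd_step = HyperDriveStep(name='hd_step01',
                         hyperdrive_config=hd_config,
                         estimator_entry_script_arguments=['--data-folder', data_folder],
                         inputs=[data_folder],
                         outputs=[metrics_data, saved_model])

# Register the best model on a small CPU cluster after tuning completes.
conda_dep = CondaDependencies()
conda_dep.add_pip_package("azureml-sdk")
rcfg = RunConfiguration(conda_dependencies=conda_dep)

register_model_step = PythonScriptStep(script_name='register_model.py',
                                       name="register_model_step01",
                                       inputs=[saved_model],
                                       compute_target=cpu_cluster,
                                       arguments=["--saved-model", saved_model],
                                       allow_reuse=True,
                                       runconfig=rcfg)
register_model_step.run_after(hd_step)

pipeline = Pipeline(workspace=ws, steps=[hd_step, register_model_step])
pipeline_run = exp.submit(pipeline)
```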

View File

@@ -0,0 +1,21 @@
import argparse
import json
import os
import azureml.core
from azureml.core import Workspace, Experiment, Model
from azureml.core import Run
from azureml.train.hyperdrive import HyperDriveRun
from shutil import copy2
parser = argparse.ArgumentParser()
parser.add_argument('--saved-model', type=str, dest='saved_model', help='path to saved model file')
args = parser.parse_args()
model_output_dir = './model/'
os.makedirs(model_output_dir, exist_ok=True)
copy2(args.saved_model, model_output_dir)
ws = Run.get_context().experiment.workspace
model = Model.register(workspace=ws, model_name='tf-dnn-mnist', model_path=model_output_dir)

View File

@@ -100,7 +100,7 @@
"\n", "\n",
"# Check core SDK version number\n", "# Check core SDK version number\n",
"\n", "\n",
"print(\"This notebook was created using SDK version 1.16.0, you are currently running version\", azureml.core.VERSION)" "print(\"This notebook was created using SDK version 1.17.0, you are currently running version\", azureml.core.VERSION)"
] ]
}, },
{ {

View File

@@ -140,4 +140,5 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
 | [img-classification-part2-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb) | | | | | | |
 | [img-classification-part3-deploy-encrypted](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/image-classification-mnist-data/img-classification-part3-deploy-encrypted.ipynb) | | | | | | |
 | [tutorial-pipeline-batch-scoring-classification](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/machine-learning-pipelines-advanced/tutorial-pipeline-batch-scoring-classification.ipynb) | | | | | | |
+| [azureml-quickstart](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/quickstart/azureml-quickstart.ipynb) | | | | | | |
 | [regression-automated-ml](https://github.com/Azure/MachineLearningNotebooks/blob/master//tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb) | | | | | | |

View File

@@ -102,7 +102,7 @@
"source": [ "source": [
"import azureml.core\n", "import azureml.core\n",
"\n", "\n",
"print(\"This notebook was created using version 1.16.0 of the Azure ML SDK\")\n", "print(\"This notebook was created using version 1.17.0 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
] ]
}, },

View File

@@ -16,6 +16,7 @@ The following tutorials are intended to provide an introductory overview of Azur
 | Tutorial | Description | Notebook | Task | Framework |
 | --- | --- | --- | --- | --- |
+| Azure Machine Learning in 10 minutes | Learn how to create and attach compute instances to notebooks, run an image classification model, track model metrics, and deploy a model | [quickstart](quickstart/azureml-quickstart.ipynb) | Learn Azure Machine Learning Concepts | PyTorch
 | [Get Started (day1)](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local) | Learn the fundamental concepts of Azure Machine Learning to help onboard your existing code to Azure Machine Learning. This tutorial focuses heavily on submitting machine learning jobs to scalable cloud-based compute clusters. | [get-started-day1](get-started-day1/day1-part1-setup.ipynb) | Learn Azure Machine Learning Concepts | PyTorch
 | [Train your first ML Model](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-train) | Learn the foundational design patterns in Azure Machine Learning and train a scikit-learn model based on a diabetes data set. | [tutorial-quickstart-train-model.ipynb](create-first-ml-experiment/tutorial-1st-experiment-sdk-train.ipynb) | Regression | Scikit-Learn
 | [Train an image classification model](https://docs.microsoft.com/azure/machine-learning/tutorial-train-models-with-aml) | Train a scikit-learn image classification model. | [img-classification-part1-training.ipynb](image-classification-mnist-data/img-classification-part1-training.ipynb) | Image Classification | Scikit-Learn

View File

@@ -246,7 +246,7 @@
"\n", "\n",
"ws = Workspace.from_config()\n", "ws = Workspace.from_config()\n",
"ct = ws.compute_targets['cpu-cluster']\n", "ct = ws.compute_targets['cpu-cluster']\n",
"ct.delete()" "# ct.delete()"
] ]
}, },
{ {

View File

@@ -0,0 +1,482 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/tutorials/quickstart/azureml-quickstart.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: Azure Machine Learning Quickstart\n",
"\n",
"In this tutorial, you learn how to quickly get started with Azure Machine Learning. Using a *compute instance* - a fully managed cloud-based VM that is pre-configured with the latest data science tools - you will train an image classification model using the CIFAR10 dataset.\n",
"\n",
"In this tutorial you will learn how to:\n",
"\n",
"* Create a compute instance and attach to a notebook\n",
"* Train an image classification model and log metrics\n",
"* Deploy the model\n",
"\n",
"## Prerequisites\n",
"\n",
"1. An Azure Machine Learning workspace\n",
"1. Familiar with the Python language and machine learning workflows.\n",
"\n",
"\n",
"## Create compute & attach to notebook\n",
"\n",
"To run this notebook you will need to create an Azure Machine Learning _compute instance_. The benefits of a compute instance over a local machine (e.g. laptop) or cloud VM are as follows:\n",
"\n",
"* It is a pre-configured with all the latest data science libaries (e.g. panads, scikit, TensorFlow, PyTorch) and tools (Jupyter, RStudio). In this tutorial we make extensive use of PyTorch, AzureML SDK, matplotlib and we do not need to install these components on a compute instance.\n",
"* Notebooks are seperate from the compute instance - this means that you can develop your notebook on a small VM size, and then seamlessly scale up (and/or use a GPU-enabled) the machine when needed to train a model.\n",
"* You can easily turn on/off the instance to control costs. \n",
"\n",
"To create compute, click on the + button at the top of the notebook viewer in Azure Machine Learning Studio:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create.PNG\" width=\"500\"/>\n",
"\n",
"This will pop up the __New compute instance__ blade, provide a valid __Compute name__ (valid characters are upper and lower case letters, digits, and the - character). Then click on __Create__. \n",
"\n",
"It will take approximately 3 minutes for the compute to be ready. When the compute is ready you will see a green light next to the compute name at the top of the notebook viewer:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/ci-create2.PNG\" width=\"500\"/>\n",
"\n",
"You will also notice that the notebook is attached to the __Python 3.6 - AzureML__ jupyter Kernel. Other kernels can be selected such as R. In addition, if you did have other instances you can switch to them by simply using the dropdown menu next to the Compute label.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Data\n",
"\n",
"For this tutorial, you will use the CIFAR10 dataset. It has the classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The images in CIFAR-10 three-channel color images of 32x32 pixels in size.\n",
"\n",
"The code cell below uses the PyTorch API to download the data to your compute instance, which should be quick (around 15 seconds). The data is divided into training and test sets.\n",
"\n",
"* **NOTE: The data is downloaded to the compute instance (in the `/tmp` directory) and not a durable cloud-based store like Azure Blob Storage or Azure Data Lake. This means if you delete the compute instance the data will be lost. The [getting started with Azure Machine Learning tutorial series](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local) shows how to create an Azure Machine Learning *dataset*, which aids durability, versioning, and collaboration.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600881820920
}
},
"outputs": [],
"source": [
"import torch\n",
"import torch.optim as optim\n",
"import torchvision\n",
"import torchvision.transforms as transforms\n",
"\n",
"transform = transforms.Compose(\n",
" [transforms.ToTensor(),\n",
" transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n",
"\n",
"trainset = torchvision.datasets.CIFAR10(root='/tmp/data', train=True,\n",
" download=True, transform=transform)\n",
"trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,\n",
" shuffle=True, num_workers=2)\n",
"\n",
"testset = torchvision.datasets.CIFAR10(root='/tmp/data', train=False,\n",
" download=True, transform=transform)\n",
"testloader = torch.utils.data.DataLoader(testset, batch_size=4,\n",
" shuffle=False, num_workers=2)\n",
"\n",
"classes = ('plane', 'car', 'bird', 'cat',\n",
" 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Take a look at the data\n",
"In the following cell, you have some python code that displays the first batch of 4 CIFAR10 images:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600882160868
}
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"def imshow(img):\n",
" img = img / 2 + 0.5 # unnormalize\n",
" npimg = img.numpy()\n",
" plt.imshow(np.transpose(npimg, (1, 2, 0)))\n",
" plt.show()\n",
"\n",
"\n",
"# get some random training images\n",
"dataiter = iter(trainloader)\n",
"images, labels = dataiter.next()\n",
"\n",
"# show images\n",
"imshow(torchvision.utils.make_grid(images))\n",
"# print labels\n",
"print(' '.join('%5s' % classes[labels[j]] for j in range(4)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train model and log metrics\n",
"\n",
"In the directory `model` you will see a file called [model.py](./model/model.py) that defines the neural network architecture. The model is trained using the code below.\n",
"\n",
"* **Note: The model training take around 4 minutes to complete. The benefit of a compute instance is that the notebooks are separate from the compute - therefore you can easily switch to a different size/type of instance. For example, you could switch to run this training on a GPU-based compute instance if you had one provisioned. In the code below you can see that we have included `torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")`, which detects whether you are using a CPU or GPU machine.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600882387754
},
"tags": [
"local run"
]
},
"outputs": [],
"source": [
"from model.model import Net\n",
"from azureml.core import Experiment\n",
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"\n",
"device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
"device\n",
"\n",
"exp = Experiment(workspace=ws, name=\"cifar10-experiment\")\n",
"run = exp.start_logging(snapshot_directory=None)\n",
"\n",
"# define convolutional network\n",
"net = Net()\n",
"net.to(device)\n",
"\n",
"# set up pytorch loss / optimizer\n",
"criterion = torch.nn.CrossEntropyLoss()\n",
"optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)\n",
"\n",
"run.log(\"learning rate\", 0.001)\n",
"run.log(\"momentum\", 0.9)\n",
"\n",
"# train the network\n",
"for epoch in range(1):\n",
" running_loss = 0.0\n",
" for i, data in enumerate(trainloader, 0):\n",
" # unpack the data\n",
" inputs, labels = data[0].to(device), data[1].to(device)\n",
"\n",
" # zero the parameter gradients\n",
" optimizer.zero_grad()\n",
"\n",
" # forward + backward + optimize\n",
" outputs = net(inputs)\n",
" loss = criterion(outputs, labels)\n",
" loss.backward()\n",
" optimizer.step()\n",
"\n",
" # print statistics\n",
" running_loss += loss.item()\n",
" if i % 2000 == 1999:\n",
" loss = running_loss / 2000\n",
" run.log(\"loss\", loss)\n",
" print(f'epoch={epoch + 1}, batch={i + 1:5}: loss {loss:.2f}')\n",
" running_loss = 0.0\n",
"\n",
"print('Finished Training')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have executed the cell below you can view the metrics updating in real time in the Azure Machine Learning studio:\n",
"\n",
"1. Select **Experiments** (left-hand menu)\n",
"1. Select **cifar10-experiment**\n",
"1. Select **Run 1**\n",
"1. Select the **Metrics** Tab\n",
"\n",
"The metrics tab will display the following graph:\n",
"\n",
"<img src=\"https://dsvmamlstorage127a5f726f.blob.core.windows.net/images/metrics-capture.PNG\" alt=\"dataset details\" width=\"500\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Understand the code\n",
"\n",
"The code is based on the [Pytorch 60minute Blitz](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py) where we have also added a few additional lines of code to track the loss metric as the neural network trains.\n",
"\n",
"| Code | Description | \n",
"| ------------- | ---------- |\n",
"| `experiment = Experiment( ... )` | [Experiment](https://docs.microsoft.com/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py&preserve-view=true) provides a simple way to organize multiple runs under a single name. Later you can see how experiments make it easy to compare metrics between dozens of runs. |\n",
"| `run.log()` | This will log the metrics to Azure Machine Learning. |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Version control models with the Model Registry\n",
"\n",
"You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. Azure Machine Learning supports any model that can be loaded through Python 3.\n",
"\n",
"The code below does:\n",
"\n",
"1. Saves the model on the compute instance\n",
"1. Uploads the model file to the run (if you look in the experiment on Azure Machine Learning studio you should see on the **Outputs + logs** tab the model has been saved in the run)\n",
"1. Registers the uploaded model file\n",
"1. Transitions the run to a completed state"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1600888071066
},
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core import Model\n",
"\n",
"PATH = 'cifar_net.pth'\n",
"torch.save(net.state_dict(), PATH)\n",
"\n",
"run.upload_file(name=PATH, path_or_stream=PATH)\n",
"model = run.register_model(model_name='cifar10-model', \n",
" model_path=PATH,\n",
" model_framework=Model.Framework.PYTORCH,\n",
" description='cifar10 model')\n",
" \n",
"run.complete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View model in the model registry\n",
"\n",
"You can see the stored model by navigating to **Models** in the left-hand menu bar of Azure Machine Learning Studio. Click on the **cifar10-model** and you can see the details of the model like the experiement run id that created the model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy the model\n",
"\n",
"The next cell deploys the model to an Azure Container Instance so that you can score data in real-time (Azure Machine Learning also provides mechanisms to do batch scoring). A real-time endpoint allows application developers to integrate machine learning into their apps.\n",
"\n",
"* **Note: The deployment takes around 3 minutes to complete.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"from azureml.core import Environment, Model\n",
"from azureml.core.model import InferenceConfig\n",
"from azureml.core.webservice import AciWebservice\n",
"\n",
"environment = Environment.get(ws, \"AzureML-PyTorch-1.6-CPU\")\n",
"model = Model(ws, \"cifar10-model\")\n",
"\n",
"service_name = 'cifar-service'\n",
"inference_config = InferenceConfig(entry_script='score.py', environment=environment)\n",
"aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"\n",
"service = Model.deploy(workspace=ws,\n",
" name=service_name,\n",
" models=[model],\n",
" inference_config=inference_config,\n",
" deployment_config=aci_config,\n",
" overwrite=True)\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Understand the code\n",
"\n",
"| Code | Description | \n",
"| ------------- | ---------- |\n",
"| `environment = Environment.get()` | [Environment](https://docs.microsoft.com/python/api/overview/azure/ml/?view=azure-ml-py#environment) specify the Python packages, environment variables, and software settings around your training and scoring scripts. In this case, you are using a *curated environment* that has all the packages to run PyTorch. |\n",
"| `inference_config = InferenceConfig()` | This specifies the inference (scoring) configuration for the deployment such as the script to use when scoring (see below) and on what environment. |\n",
"| `service = Model.deploy()` | Deploy the model. |\n",
"\n",
"The [*scoring script*](score.py) file is has two functions:\n",
"\n",
"1. an `init` function that executes once when the service starts - in this function you normally get the model from the registry and set global variables\n",
"1. a `run(data)` function that executes each time a call is made to the service. In this function, you normally deserialize the json, run a prediction and output the predicted result.\n",
"\n",
"\n",
"## Test the model service\n",
"\n",
"In the next cell, you get some unseen data from the test loader:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataiter = iter(testloader)\n",
"images, labels = dataiter.next()\n",
"\n",
"# print images\n",
"imshow(torchvision.utils.make_grid(images))\n",
"print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, the next cell runs scores the above images using the deployed model service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"input_payload = json.dumps({\n",
" 'data': images.tolist()\n",
"})\n",
"\n",
"output = service.run(input_payload)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up resources\n",
"\n",
"To clean up the resources after this quickstart, firstly delete the Model service using:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"service.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next stop the compute instance by following these steps:\n",
"\n",
"1. Go to **Compute** in the left-hand menu of the Azure Machine Learning studio\n",
"1. Select your compute instance\n",
"1. Select **Stop**\n",
"\n",
"\n",
"**Important: The resources you created can be used as prerequisites to other Azure Machine Learning tutorials and how-to articles.** If you don't plan to use the resources you created, delete them, so you don't incur any charges:\n",
"\n",
"1. In the Azure portal, select **Resource groups** on the far left.\n",
"1. From the list, select the resource group you created.\n",
"1. Select **Delete resource group**.\n",
"1. Enter the resource group name. Then select **Delete**.\n",
"\n",
"You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps\n",
"\n",
"In this tutorial, you have seen how to run your machine learning code on a fully managed, pre-configured cloud-based VM called a *compute instance*. Having a compute instance for your development environment removes the burden of installing data science tooling and libraries (for example, Jupyter, PyTorch, TensorFlow, Scikit) and allows you to easily scale up/down the compute power (RAM, cores) since the notebooks are separated from the VM. \n",
"\n",
"It is often the case that once you have your machine learning code working in a development environment that you want to productionize this by running as a **_job_** - ideally on a schedule or trigger (for example, arrival of new data). To this end, we recommend that you follow [**the day 1 getting started with Azure Machine Learning tutorial**](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local). This day 1 tutorial is focussed on running jobs-based machine learning code in the cloud."
]
}
],
"metadata": {
"authors": [
{
"name": "samkemp"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
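
The deployment cell above references `score.py`, which is not included in this comparison. Based on the notebook's own description of the entry script (an `init` that loads the model once, and a `run(data)` that deserializes the JSON payload and returns predictions), a hypothetical minimal version might look like this; the actual file in the repository may differ:

```python
import json
import os

import torch

from model.model import Net  # the CNN defined in model/model.py below

model = None


def init():
    # Called once when the service starts: load the registered model weights.
    # AZUREML_MODEL_DIR is set by the service to the registered model folder;
    # 'cifar_net.pth' matches the PATH used by run.register_model above.
    global model
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'cifar_net.pth')
    model = Net()
    model.load_state_dict(torch.load(model_path, map_location='cpu'))
    model.eval()


def run(data):
    # Called per request: deserialize the {'data': [...]} payload produced by
    # json.dumps({'data': images.tolist()}) in the notebook, then predict.
    images = torch.tensor(json.loads(data)['data'])
    with torch.no_grad():
        outputs = model(images)
    _, predicted = torch.max(outputs, 1)
    return predicted.tolist()  # class indices, one per input image
```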

View File

@@ -0,0 +1,7 @@
name: azureml-quickstart
dependencies:
- pip:
- azureml-sdk
- torch
- torchvision
- matplotlib
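
A short way to confirm the quickstart environment resolved correctly (a sketch, assuming it is the active kernel; note the PyTorch pip package is named `torch`):

```python
# Verify the quickstart dependencies are importable in the active kernel.
import azureml.core
import matplotlib
import torch
import torchvision

print("Azure ML SDK:", azureml.core.VERSION)
print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
```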

View File

@@ -0,0 +1,22 @@
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
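
As a quick sanity check on the network above, a CIFAR-10-shaped batch should flow through to 10 logits per image. A small usage sketch, assuming the file is saved as `model/model.py` as the quickstart notebook expects:

```python
import torch

from model.model import Net

net = Net()
dummy_batch = torch.randn(4, 3, 32, 32)  # batch of 4 CIFAR-10-sized images
logits = net(dummy_batch)
print(logits.shape)  # torch.Size([4, 10]) -> one logit per CIFAR-10 class
```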