Compare commits

...

8 Commits

| Author | SHA1 | Message | Date |
|:-------|:-----|:--------|:-----|
| vizhur | a8b08bdff0 | update samples - test | 2019-11-12 21:53:12 +00:00 |
| Shané Winner | 0dc3f34b86 | Update index.md | 2019-11-11 14:49:44 -08:00 |
| Shané Winner | 9ba7d5e5bb | Update index.md | 2019-11-11 14:48:05 -08:00 |
| Shané Winner | c6ad2f8ec0 | Merge pull request #654 from Azure/release_update/Release-158 (update samples from Release-158 as a part of 1.0.74 SDK release) | 2019-11-11 10:25:18 -08:00 |
| vizhur | 33d6def8c3 | update samples from Release-158 as a part of 1.0.74 SDK release | 2019-11-11 16:57:02 +00:00 |
| Shané Winner | 69d4344dff | Update index.md | 2019-11-04 10:09:41 -08:00 |
| Shané Winner | 34aeec1439 | Update index.md | 2019-11-04 10:08:10 -08:00 |
| Shané Winner | a9b9ebbf7d | Merge pull request #641 from Azure/release_update/Release-27 (update samples - test) | 2019-11-04 10:02:25 -08:00 |
34 changed files with 379 additions and 107 deletions

View File

@@ -103,7 +103,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.0.72.1 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.0.74.1 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},

View File

@@ -0,0 +1,127 @@
# Azure Machine Learning Batch Inference
Azure Machine Learning Batch Inference targets large inference jobs that are not time-sensitive. Batch Inference provides cost-effective inference compute scaling, with unparalleled throughput for asynchronous applications. It is optimized for high-throughput, fire-and-forget inference over large collections of data.
# Getting Started with Batch Inference Public Preview
The Batch Inference public preview offers a platform for running large-scale inference and generic parallel map-style operations. The sections below introduce the major steps for using this new functionality. For a quick start, complete the prerequisites and run the sample notebooks provided in this directory.
## Prerequisites
### Python package installation
Following the convention of most AzureML Public Preview features, the Batch Inference SDK is currently available as a contrib package.
If you're unfamiliar with creating a new Python environment, you can follow this example for [creating a conda environment](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#local). The Batch Inference package can be installed with the following pip command:
```
pip install azureml-contrib-pipeline-steps
```
### Creation of Azure Machine Learning Workspace
If you do not already have an Azure ML Workspace, please run the [configuration notebook](../../configuration.ipynb).
## Configure a Batch Inference job
To run a Batch Inference job, you will need to gather some configuration data; a minimal entry-script and ParallelRunConfig sketch follows the list below.
1. **ParallelRunConfig**
    - **entry_script**: the local file path to the scoring script. If source_directory is specified, use a path relative to it; otherwise, use any path accessible on the machine.
    - **error_threshold**: the number of record failures (for TabularDataset) or file failures (for FileDataset) that should be ignored during processing. If the aggregated error count across all workers goes above this value, the job will be aborted. Set to -1 to ignore all failures during processing.
    - **output_action**: one of the following values:
        - **"append_row"**: all values output by run() method invocations will be aggregated into one unique file named parallel_run_step.txt that is created in the output location.
        - **"summary_only"**: the scoring script will handle the output by itself. The script still needs to return one output row per successfully processed input item; this is used for the error threshold calculation (the actual value of the output row is ignored).
    - **source_directory**: supporting files for scoring (optional).
    - **compute_target**: only **AmlCompute** is currently supported.
    - **node_count**: number of compute nodes to use.
    - **process_count_per_node**: number of processes per node (optional, default value is 1).
    - **mini_batch_size**: the approximate amount of input data passed to each run() invocation. For FileDataset input, this is the number of files the user script can process in one run() call. For TabularDataset input, it is the approximate size of data the user script can process in one run() call, e.g., 1024, 1024KB, 10MB, or 1GB (optional; default value is 10 files for FileDataset and 1MB for TabularDataset).
    - **logging_level**: log verbosity. Values in increasing verbosity are: 'WARNING', 'INFO', 'DEBUG' (optional, default value is 'INFO').
    - **run_invocation_timeout**: run() method invocation timeout period in seconds (optional, default value is 60).
    - **environment**: the environment definition. This field configures the Python environment; it can be configured to use an existing Python environment or to set up a temporary environment for the experiment. The definition is also responsible for setting the required application dependencies.
    - **description**: name given to the batch service.
2. **Scoring (entry) script**: the entry point for execution. The scoring script should contain two functions:
    - **init()**: this function should be used for any costly or common preparation for subsequent inferences, e.g., deserializing and loading the model into a global object.
    - **run(mini_batch)**: the method to be parallelized. Each invocation processes one mini-batch.
        - **mini_batch**: Batch Inference will invoke the run() method and pass either a list or a Pandas DataFrame as an argument: a list of file paths if the input is a FileDataset, or a Pandas DataFrame if the input is a TabularDataset.
        - **return value**: the run() method should return a Pandas DataFrame or an array. For the append_row output_action, these returned elements are appended into the common output file. For summary_only, the contents of the elements are ignored. For all output actions, each returned output element indicates one successful inference of an input element in the input mini-batch.
3. **Base image** (optional)
    - If a GPU is required, use DEFAULT_GPU_IMAGE as the base image in the environment. [Example GPU environment](./file-dataset-image-inference-mnist.ipynb#specify-the-environment-to-run-the-script)
Example image pull:
```python
from azureml.core.runconfig import ContainerRegistry

# use an image available in a public Container Registry without authentication
public_base_image = "mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda"

# or use an image available in a private Container Registry
base_image = "myregistry.azurecr.io/mycustomimage:1.0"
base_image_registry = ContainerRegistry()
base_image_registry.address = "myregistry.azurecr.io"
base_image_registry.username = "username"
base_image_registry.password = "password"

# attach the custom image to the Environment used by ParallelRunConfig
# (batch_env is assumed to be an azureml.core.Environment object)
batch_env.docker.base_image = base_image
batch_env.docker.base_image_registry = base_image_registry
```
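To make the pieces above concrete, here is a minimal sketch, assuming a hypothetical entry script named `score.py`, a FileDataset input, and placeholder names (`my-model`, `load_my_model`, `compute_target`, `batch_env`) that are not part of this repository's samples:
```python
# score.py -- hypothetical entry script (model name and helper functions are placeholders)
import os

from azureml.core.model import Model


def init():
    # Costly, one-time preparation: load the registered model into a global object.
    global model
    model_path = Model.get_model_path("my-model")  # placeholder model name
    model = load_my_model(model_path)              # placeholder deserialization helper


def run(mini_batch):
    # For FileDataset input, mini_batch is a list of file paths.
    # Return one output row per successfully processed input item.
    results = []
    for file_path in mini_batch:
        prediction = model.predict(file_path)      # placeholder inference call
        results.append("{}: {}".format(os.path.basename(file_path), prediction))
    return results
```
And a ParallelRunConfig wiring the script to compute and an environment; the values shown are examples only, and ParallelRunConfig is assumed to be importable from the same contrib package as ParallelRunStep:
```python
from azureml.contrib.pipeline.steps import ParallelRunConfig

parallel_run_config = ParallelRunConfig(
    entry_script="score.py",          # relative to source_directory
    source_directory=".",
    error_threshold=10,
    output_action="append_row",
    compute_target=compute_target,    # an existing AmlCompute target
    node_count=2,
    process_count_per_node=1,
    mini_batch_size="5",              # ~5 files per run() call for FileDataset input
    environment=batch_env,            # an azureml.core.Environment (e.g., batch_env above)
    logging_level="INFO",
    run_invocation_timeout=60)
```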
## Create a batch inference job
**ParallelRunStep** is a newly added step in the azureml.contrib.pipeline.steps package. You use it to add a batch inference step to your Azure Machine Learning pipeline. (Using batch inference without an Azure Machine Learning pipeline is not yet supported.) ParallelRunStep takes the following parameters; a minimal construction is sketched after the list below:
- **name**: the name used to register the batch inference service. It has the following naming restrictions: unique, 3-32 characters, and matching the regex ^\[a-z\]([-a-z0-9]*[a-z0-9])?$.
- **models**: zero or more model names already registered in the Azure Machine Learning model registry.
- **parallel_run_config**: ParallelRunConfig, as defined above.
- **inputs**: one or more Dataset objects.
- **output**: a PipelineData object encapsulating an Azure Blob container path.
- **arguments**: list of custom arguments passed to the scoring script (optional).
- **allow_reuse**: optional, default value is True. If the inputs remain the same as in a previous run, the previous run's results are made immediately available (the step is not re-computed).
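A minimal construction might look like the following sketch; the dataset, model, and ParallelRunConfig objects are assumed to have been created as described above, and the step and output names are placeholders:
```python
from azureml.contrib.pipeline.steps import ParallelRunStep
from azureml.pipeline.core import PipelineData

# Output location for parallel_run_step.txt, backed by the workspace's default blob datastore.
output_dir = PipelineData(name="inferences", datastore=ws.get_default_datastore())

parallelrun_step = ParallelRunStep(
    name="batch-inference-step",               # must satisfy the naming restrictions above
    models=[model],                            # registered model(s) from the workspace model registry
    parallel_run_config=parallel_run_config,   # the ParallelRunConfig defined earlier
    inputs=[input_dataset.as_named_input("input_ds")],
    output=output_dir,
    arguments=["--model_name", "mosaic"],      # optional custom arguments
    allow_reuse=True)
```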
## Passing arguments from pipeline submission to script
Many tasks require arguments to be passed from the job submission to the distributed runs. Below is an example of passing such information.
```python
# from the script that creates the pipeline job
parallelrun_step = ParallelRunStep(
    ...
    arguments=["--model_name", "mosaic"]  # name of the model we want to use, in case we have more than one option
)
```
```python
# from the entry script (driver.py/score.py/task.py)
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model_name', dest="model_name")
args, unknown_args = parser.parse_known_args()

# to access the value
args.model_name  # "mosaic"
```
## Submit a batch inference job
You can submit a batch inference job as a pipeline_run through the SDK, or through REST calls with a published pipeline (see the sketch after the example below). To control the node count through the REST API or experiment, use the aml_node_count (special) pipeline parameter. A typical use case follows:
```python
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline

pipeline = Pipeline(workspace=ws, steps=[parallelrun_step])
pipeline_run = Experiment(ws, 'name_of_pipeline_run').submit(pipeline)
```
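For the published-pipeline path, a hedged sketch of a REST submission that sets the aml_node_count pipeline parameter could look like the following; the pipeline name, experiment name, and node count are placeholders, and the request body is assumed to follow the documented ExperimentName/ParameterAssignments format:
```python
import requests

from azureml.core.authentication import InteractiveLoginAuthentication

# Publish the pipeline so it can be triggered over REST.
published_pipeline = pipeline.publish(name="batch-inference-pipeline",
                                      description="Batch inference via REST")

# Obtain an AAD authorization header for the REST call.
auth_header = InteractiveLoginAuthentication().get_authentication_header()

response = requests.post(
    published_pipeline.endpoint,
    headers=auth_header,
    json={"ExperimentName": "name_of_pipeline_run",
          "ParameterAssignments": {"aml_node_count": 2}})  # special parameter to control node count

print(response.json().get("Id"))  # run id of the submitted pipeline run
```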
## Monitor your batch inference job
A batch inference job can take a long time to finish. You can monitor your job's progress from the Azure portal, with Azure ML widgets, by viewing console output through the SDK, or by checking overview.txt in the log/azureml directory.
```python
# view with widgets (will display GUI inside a browser)
from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
# simple console output
pipeline_run.wait_for_completion(show_output=True)
```
# Sample notebooks
- [file-dataset-image-inference-mnist.ipynb](./file-dataset-image-inference-mnist.ipynb) demonstrates how to run batch inference on an MNIST dataset.
- [tabular-dataset-inference-iris.ipynb](./tabular-dataset-inference-iris.ipynb) demonstrates how to run batch inference on an IRIS dataset.
![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/contrib/batch_inferencing/README.png)

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
@@ -476,7 +476,7 @@
"df = pd.read_csv(result_file, delimiter=\":\", header=None)\n",
"df.columns = [\"Filename\", \"Prediction\"]\n",
"print(\"Prediction has \", df.shape[0], \" rows\")\n",
"df.head(10)"
"df.head(10) "
]
},
{

View File

@@ -3,5 +3,4 @@ dependencies:
- pip:
- azureml-sdk
- azureml-contrib-pipeline-steps
- pandas
- azureml-widgets

View File

@@ -98,7 +98,7 @@
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
" \n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # can poll for a minimum number of nodes and for a specific timeout.\n",
" # if no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
@@ -404,7 +404,7 @@
"source": [
"# GUI\n",
"from azureml.widgets import RunDetails\n",
"RunDetails(pipeline_run).show()"
"RunDetails(pipeline_run).show() "
]
},
{

View File

@@ -3,5 +3,4 @@ dependencies:
- pip:
- azureml-sdk
- azureml-contrib-pipeline-steps
- pandas
- azureml-widgets

View File

@@ -334,8 +334,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
"Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while."
]
},
{

View File

@@ -230,8 +230,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.\n",
"In this example, we specify `show_output = True` to print currently running iterations to the console."
"Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while."
]
},
{

View File

@@ -308,7 +308,7 @@
"metadata": {},
"outputs": [],
"source": [
"automl_run = experiment.submit(automl_config, show_output=False)"
"automl_run = experiment.submit(automl_config, show_output=True)"
]
},
{
@@ -320,15 +320,6 @@
"automl_run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"automl_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -357,7 +348,7 @@
"metadata": {},
"outputs": [],
"source": [
"#best_run, fitted_model = local_run.get_output()"
"#best_run, fitted_model = automl_run.get_output()"
]
},
{

View File

@@ -376,7 +376,7 @@
"hidePrompt": false
},
"source": [
"We will now run the experiment, starting with 10 iterations of model search. The experiment can be continued for more iterations if more accurate results are required. You will see the currently running iterations printing to the console."
"We will now run the experiment, starting with 10 iterations of model search. The experiment can be continued for more iterations if more accurate results are required."
]
},
{

View File

@@ -345,7 +345,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"sample-akscompute-provision"
]
},
"outputs": [],
"source": [
"from azureml.core.compute import AksCompute, ComputeTarget\n",

View File

@@ -682,7 +682,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"sample-akswebservice-deploy-from-image"
]
},
"outputs": [],
"source": [
"%%time\n",

View File

@@ -166,7 +166,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"sample-localwebservice-deploy"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import LocalWebservice\n",

View File

@@ -316,9 +316,6 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"from random import randint\n",
"\n",
"aci_service_name = 'my-aci-service-15ad'\n",
"print(\"Service\", aci_service_name)\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
@@ -386,6 +383,22 @@
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"local"
],
"datasets": [
"PASCAL VOC"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Convert and deploy TinyYolo with ONNX Runtime",
"index_order": 5,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -402,7 +415,14 @@
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"star_tag": [
"featured"
],
"tags": [
"ONNX Converter"
],
"task": "Object Detection"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -2,5 +2,6 @@ name: onnx-convert-aml-deploy-tinyyolo
dependencies:
- pip:
- azureml-sdk
- numpy
- git+https://github.com/apple/coremltools@v2.1
- onnxmltools==1.3.1

View File

@@ -391,8 +391,6 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'onnx-demo-emotion'\n",
"print(\"Service\", aci_service_name)\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
@@ -755,6 +753,22 @@
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"local"
],
"datasets": [
"Emotion FER"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Deploy Facial Expression Recognition (FER+) with ONNX Runtime",
"index_order": 2,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -772,7 +786,12 @@
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"msauthor": "vinitra.swamy"
"msauthor": "vinitra.swamy",
"star_tag": [],
"tags": [
"ONNX Model Zoo"
],
"task": "Facial Expression Recognition"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -378,8 +378,6 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"\n",
"aci_service_name = 'onnx-demo-mnist'\n",
"print(\"Service\", aci_service_name)\n",
"aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n",
@@ -763,6 +761,22 @@
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"local"
],
"datasets": [
"MNIST"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Deploy MNIST digit recognition with ONNX Runtime",
"index_order": 1,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -780,7 +794,12 @@
"pygments_lexer": "ipython3",
"version": "3.6.5"
},
"msauthor": "vinitra.swamy"
"msauthor": "vinitra.swamy",
"star_tag": [],
"tags": [
"ONNX Model Zoo"
],
"task": "Image Classification"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -302,7 +302,6 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"from random import randint\n",
"\n",
"aci_service_name = 'onnx-demo-resnet50'+str(randint(0,100))\n",
@@ -372,6 +371,22 @@
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"local"
],
"datasets": [
"ImageNet"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Deploy ResNet50 with ONNX Runtime",
"index_order": 4,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -388,7 +403,12 @@
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"star_tag": [],
"tags": [
"ONNX Model Zoo"
],
"task": "Image Classification"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -477,7 +477,6 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import Webservice\n",
"from azureml.core.model import Model\n",
"from random import randint\n",
"\n",
@@ -548,6 +547,22 @@
"name": "viswamy"
}
],
"category": "deployment",
"compute": [
"AML Compute"
],
"datasets": [
"MNIST"
],
"deployment": [
"Azure Container Instance"
],
"exclude_from_index": false,
"framework": [
"ONNX"
],
"friendly_name": "Train MNIST in PyTorch, convert, and deploy with ONNX Runtime",
"index_order": 3,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -565,6 +580,11 @@
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"star_tag": [],
"tags": [
"ONNX Converter"
],
"task": "Image Classification",
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"state": {

View File

@@ -235,7 +235,8 @@
"execution_count": null,
"metadata": {
"tags": [
"create image"
"create image",
"sample-image-create"
]
},
"outputs": [],
@@ -330,7 +331,8 @@
"metadata": {
"tags": [
"deploy service",
"aci"
"aci",
"sample-aciwebservice-deploy-config"
]
},
"outputs": [],
@@ -349,7 +351,8 @@
"metadata": {
"tags": [
"deploy service",
"aci"
"aci",
"sample-aciwebservice-deploy-from-image"
]
},
"outputs": [],

View File

@@ -110,7 +110,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"sample-batchcompute-attach"
]
},
"outputs": [],
"source": [
"batch_compute_name = 'mybatchcompute' # Name to associate with new compute in workspace\n",

View File

@@ -88,7 +88,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"sample-adlacompute-attach"
]
},
"outputs": [],
"source": [
"adla_compute_name = 'testadl' # Name to associate with new compute in workspace\n",

View File

@@ -142,7 +142,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"sample-databrickscompute-attach"
]
},
"outputs": [],
"source": [
"# Replace with your account info before running.\n",

View File

@@ -475,7 +475,7 @@
"metadata": {
"authors": [
{
"name": "copeters"
"name": "jamgan"
}
],
"category": "tutorial",

View File

@@ -100,7 +100,7 @@
"\n",
"# Check core SDK version number\n",
"\n",
"print(\"This notebook was created using SDK version 1.0.72.1, you are currently running version\", azureml.core.VERSION)"
"print(\"This notebook was created using SDK version 1.0.74.1, you are currently running version\", azureml.core.VERSION)"
]
},
{

View File

@@ -3,4 +3,4 @@ dependencies:
- pip:
- azureml-sdk
- azureml-tensorboard
- tensorflow<2.0.0
- tensorflow<1.15

View File

@@ -920,6 +920,8 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"cd = CondaDependencies.create()\n",
"cd.add_tensorflow_conda_package()\n",
"cd.add_conda_package('keras==2.2.5')\n",
@@ -1041,7 +1043,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can retreive the API keys used for accessing the HTTP endpoint."
"We can retrieve the API keys used for accessing the HTTP endpoint."
]
},
{
@@ -1050,7 +1052,7 @@
"metadata": {},
"outputs": [],
"source": [
"# retreive the API keys. two keys were generated.\n",
"# Retrieve the API keys. Two keys were generated.\n",
"key1, Key2 = service.get_keys()\n",
"print(key1)"
]

View File

@@ -1,10 +1,10 @@
name: train-hyperparameter-tune-deploy-with-keras
dependencies:
- matplotlib
- pip:
- azureml-sdk
- azureml-widgets
- tensorflow==1.13.1
- keras==2.2.5
- pandas
- matplotlib==3.0.3
- numpy==1.16.2
- pandas

View File

@@ -133,7 +133,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"sample-hdinsightcompute-attach"
]
},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, HDInsightCompute\n",
@@ -262,6 +266,22 @@
"name": "aashishb"
}
],
"category": "training",
"compute": [
"HDI cluster"
],
"datasets": [
"None"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"PySpark"
],
"friendly_name": "Training in Spark",
"index_order": 1,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -279,26 +299,10 @@
"pygments_lexer": "ipython3",
"version": "3.6.7"
},
"friendly_name": "Training in Spark",
"exclude_from_index": false,
"index_order": 1,
"category": "training",
"task": "Submiting a run on a spark cluster",
"datasets": [
"None"
],
"compute": [
"HDI cluster"
],
"deployment": [
"None"
],
"framework": [
"PySpark"
],
"tags": [
"None"
]
],
"task": "Submiting a run on a spark cluster"
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -203,7 +203,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"sample-amlcompute-provision"
]
},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
@@ -293,7 +297,11 @@
"* `idle_seconds_before_scaledown`: Idle time (default 120 seconds) to wait after run completion before auto-scaling to min_nodes\n",
"* `vnet_resourcegroup_name`: Resource group of the **existing** VNet within which AmlCompute should be provisioned\n",
"* `vnet_name`: Name of VNet\n",
"* `subnet_name`: Name of SubNet within the VNet"
"* `subnet_name`: Name of SubNet within the VNet\n",
"* `admin_username`: Name of Admin user account which will be created on all the nodes of the cluster\n",
"* `admin_user_password`: Password that you want to set for the user account above\n",
"* `admin_user_ssh_key`: SSH Key for the user account above. You can specify either a password or an SSH key or both\n",
"* `remote_login_port_public_access`: Flag to enable or disable the public SSH port. If you dont specify, AmlCompute will smartly close the port when deploying inside a VNet"
]
},
{
@@ -320,7 +328,11 @@
" idle_seconds_before_scaledown='300',\n",
" vnet_resourcegroup_name='<my-resource-group>',\n",
" vnet_name='<my-vnet-name>',\n",
" subnet_name='<my-subnet-name>')\n",
" subnet_name='<my-subnet-name>',\n",
" admin_username='<my-username>',\n",
" admin_user_password='<my-password>',\n",
" admin_user_ssh_key='<my-sshkey>',\n",
" remote_login_port_public_access='enabled')\n",
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n",
"\n",
"cpu_cluster.wait_for_completion(show_output=True)"
@@ -381,10 +393,20 @@
"metadata": {},
"outputs": [],
"source": [
"#Get_status () gets the latest status of the AmlCompute target\n",
"#get_status () gets the latest status of the AmlCompute target\n",
"cpu_cluster.get_status().serialize()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#list_nodes () gets the list of nodes on the cluster with status, IP and associated run\n",
"cpu_cluster.list_nodes()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -425,6 +447,22 @@
"name": "nigup"
}
],
"category": "training",
"compute": [
"AML Compute"
],
"datasets": [
"Diabetes"
],
"deployment": [
"None"
],
"exclude_from_index": false,
"framework": [
"None"
],
"friendly_name": "Train on Azure Machine Learning Compute",
"index_order": 1,
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -442,26 +480,10 @@
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"friendly_name": "Train on Azure Machine Learning Compute",
"exclude_from_index": false,
"index_order": 1,
"category": "training",
"task": "Submit an Azure Machine Leaarning Compute run",
"datasets": [
"Diabetes"
],
"compute": [
"AML Compute"
],
"deployment": [
"None"
],
"framework": [
"None"
],
"tags": [
"None"
]
],
"task": "Submit a run on Azure Machine Learning Compute."
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -243,7 +243,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"tags": [
"sample-remotecompute-attach"
]
},
"outputs": [],
"source": [
"from azureml.core.compute import ComputeTarget, RemoteCompute\n",

View File

@@ -409,7 +409,7 @@
"metadata": {
"authors": [
{
"name": "sihhu"
"name": "jamgan"
}
],
"category": "tutorial",

View File

@@ -107,7 +107,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [Training in Spark](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-in-spark/train-in-spark.ipynb) | Submiting a run on a spark cluster | None | HDI cluster | None | PySpark | None |
| [Train on Azure Machine Learning Compute](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb) | Submit an Azure Machine Leaarning Compute run | Diabetes | AML Compute | None | None | None |
| [Train on Azure Machine Learning Compute](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb) | Submit a run on Azure Machine Learning Compute. | Diabetes | AML Compute | None | None | None |
| [Train on local compute](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-local/train-on-local.ipynb) | Train a model locally | Diabetes | Local | None | None | None |
@@ -132,10 +132,20 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
|Title| Task | Dataset | Training Compute | Deployment Target | ML Framework | Tags |
|:----|:-----|:-------:|:----------------:|:-----------------:|:------------:|:------------:|
| [Deploy MNIST digit recognition with ONNX Runtime](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb) | Image Classification | MNIST | local | Azure Container Instance | ONNX | ONNX Model Zoo |
| [Deploy Facial Expression Recognition (FER+) with ONNX Runtime](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb) | Facial Expression Recognition | Emotion FER | local | Azure Container Instance | ONNX | ONNX Model Zoo |
| :star:[Register model and deploy as webservice](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/deploy-to-cloud/model-register-and-deploy.ipynb) | Deploy a model with Azure Machine Learning | Diabetes | None | Azure Container Instance | Scikit-learn | None |
| [Train MNIST in PyTorch, convert, and deploy with ONNX Runtime](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb) | Image Classification | MNIST | AML Compute | Azure Container Instance | ONNX | ONNX Converter |
| [Deploy ResNet50 with ONNX Runtime](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb) | Image Classification | ImageNet | local | Azure Container Instance | ONNX | ONNX Model Zoo |
| [Deploy a model as a web service using MLflow](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/track-and-monitor-experiments/using-mlflow/deploy-model/deploy-model.ipynb) | Use MLflow with AML | Diabetes | None | Azure Container Instance | Scikit-learn | None |
| :star:[Convert and deploy TinyYolo with ONNX Runtime](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb) | Object Detection | PASCAL VOC | local | Azure Container Instance | ONNX | ONNX Converter |
## Other Notebooks
@@ -191,18 +201,8 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [enable-app-insights-in-production-service](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) | | | | | | |
| [onnx-convert-aml-deploy-tinyyolo](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb) | | | | | | |
| [onnx-inference-facial-expression-recognition-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb) | | | | | | |
| [onnx-inference-mnist-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb) | | | | | | |
| [onnx-model-register-and-deploy](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-model-register-and-deploy.ipynb) | | | | | | |
| [onnx-modelzoo-aml-deploy-resnet50](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb) | | | | | | |
| [onnx-train-pytorch-aml-deploy-mnist](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb) | | | | | | |
| [production-deploy-to-aks](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb) | | | | | | |
| [register-model-create-image-deploy-service](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/register-model-create-image-deploy-service/register-model-create-image-deploy-service.ipynb) | | | | | | |

View File

@@ -102,7 +102,7 @@
"source": [
"import azureml.core\n",
"\n",
"print(\"This notebook was created using version 1.0.72.1 of the Azure ML SDK\")\n",
"print(\"This notebook was created using version 1.0.74.1 of the Azure ML SDK\")\n",
"print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
]
},