{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Microsoft Corporation. All rights reserved.\n",
    "\n",
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Register Spark Model and deploy as Webservice\n",
    "\n",
    "This example shows how to register a Spark model and deploy it as a Webservice in a step-by-step fashion:\n",
    "\n",
    " 1. Register a Spark model\n",
    " 2. Deploy the Spark model as a Webservice"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check core SDK version number\n",
    "import azureml.core\n",
    "\n",
    "print(\"SDK version:\", azureml.core.VERSION)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Initialize Workspace\n",
    "\n",
    "Initialize a workspace object from persisted configuration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "create workspace"
    ]
   },
   "outputs": [],
   "source": [
    "from azureml.core import Workspace\n",
    "\n",
    "ws = Workspace.from_config()\n",
    "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Register Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can add tags and descriptions to your Models. Note that you need an `iris.model` file in the current directory. This model file is generated by the [train in spark](../training/train-in-spark/train-in-spark.ipynb) notebook. The call below registers that file as a Model named `iris.model` in the workspace.\n",
    "\n",
    "Using tags, you can track useful information such as the name and version of the machine learning library used to train the model. Note that tags must be alphanumeric."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "register model from file"
    ]
   },
   "outputs": [],
   "source": [
    "from azureml.core.model import Model\n",
    "\n",
    "model = Model.register(model_path=\"iris.model\",\n",
    "                       model_name=\"iris.model\",\n",
    "                       tags={'type': \"regression\"},\n",
    "                       description=\"Logistic regression model to predict iris species\",\n",
    "                       workspace=ws)"
   ]
  },
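  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you return to this workspace later, you do not need to register the model again. As a minimal sketch (not part of the original sample), you can fetch the latest registered version by name:\n",
    "\n",
    "```python\n",
    "from azureml.core.model import Model\n",
    "\n",
    "# Retrieves the latest registered version of the model by name\n",
    "model = Model(ws, name='iris.model')\n",
    "print(model.name, model.version, model.tags)\n",
    "```"
   ]
  },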
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Fetch Environment"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can create and/or use an Environment object when deploying a Webservice. The Environment can either have been registered with your Workspace beforehand, or it will be registered as part of the Webservice deployment.\n",
    "\n",
    "In this notebook, we will be using 'AzureML-PySpark-MmlSpark-0.15', a curated environment.\n",
    "\n",
    "More information can be found in our [using environments notebook](../training/using-environments/using-environments.ipynb)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core import Environment\n",
    "\n",
    "env = Environment.get(ws, name='AzureML-PySpark-MmlSpark-0.15')"
   ]
  },
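  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If this curated environment is not available in your workspace, you can define your own Environment instead. The snippet below is only a minimal sketch; the environment name and package list are illustrative and not part of the original sample:\n",
    "\n",
    "```python\n",
    "from azureml.core import Environment\n",
    "from azureml.core.conda_dependencies import CondaDependencies\n",
    "\n",
    "# Hypothetical custom environment; the package list is illustrative only\n",
    "custom_env = Environment(name='my-pyspark-serving-env')\n",
    "custom_env.python.conda_dependencies = CondaDependencies.create(\n",
    "    pip_packages=['azureml-defaults'])\n",
    "```"
   ]
  },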
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create Inference Configuration\n",
    "\n",
    "There is now support for a source directory: you can upload an entire folder from your local machine as dependencies for the Webservice.\n",
    "Note: in that case, `entry_script` must be a path relative to `source_directory`.\n",
    "\n",
    "Sample code for using a source directory:\n",
    "\n",
    "```python\n",
    "inference_config = InferenceConfig(source_directory=\"C:/abc\",\n",
    "                                   entry_script=\"x/y/score.py\",\n",
    "                                   environment=environment)\n",
    "```\n",
    "\n",
    " - source_directory = the source path as a string; this entire folder gets added to the image, so any files within the folder or its subfolders are easy to access\n",
    " - entry_script = the script containing the logic for initializing your model and running predictions\n",
    " - environment = an Environment object to use for the deployment; it does not have to be registered beforehand"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "create image"
    ]
   },
   "outputs": [],
   "source": [
    "from azureml.core.model import InferenceConfig\n",
    "\n",
    "inference_config = InferenceConfig(entry_script=\"score.py\", environment=env)"
   ]
  },
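  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `score.py` entry script referenced above ships alongside this sample and is not reproduced in the notebook. As a rough, illustrative sketch only (it assumes the registered file is a Spark ML `PipelineModel` and that requests arrive as a JSON list of feature records; the actual script may differ), an entry script defines an `init()` function that loads the model and a `run()` function that scores incoming requests:\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "from pyspark.sql import SparkSession\n",
    "from pyspark.ml import PipelineModel\n",
    "from azureml.core.model import Model\n",
    "\n",
    "\n",
    "def init():\n",
    "    global spark, model\n",
    "    spark = SparkSession.builder.getOrCreate()\n",
    "    # Resolve the on-disk path of the registered model inside the container\n",
    "    model_path = Model.get_model_path('iris.model')\n",
    "    model = PipelineModel.load(model_path)\n",
    "\n",
    "\n",
    "def run(raw_data):\n",
    "    try:\n",
    "        # Assumes a JSON list of feature records; the sample's real payload differs\n",
    "        rows = json.loads(raw_data)\n",
    "        df = spark.createDataFrame(rows)\n",
    "        predictions = model.transform(df).collect()\n",
    "        return [row['prediction'] for row in predictions]\n",
    "    except Exception as e:\n",
    "        return str(e)\n",
    "```"
   ]
  },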
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deploy Model as Webservice on Azure Container Instance\n",
    "\n",
    "Note that the service creation can take a few minutes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "azuremlexception-remarks-sample"
    ]
   },
   "outputs": [],
   "source": [
    "from azureml.core.webservice import AciWebservice, Webservice\n",
    "from azureml.exceptions import WebserviceException\n",
    "\n",
    "deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
    "aci_service_name = 'aciservice1'\n",
    "\n",
    "try:\n",
    "    # The service name must be unique within the subscription,\n",
    "    # so delete any existing service with this name before redeploying.\n",
    "    service = Webservice(ws, name=aci_service_name)\n",
    "    if service:\n",
    "        service.delete()\n",
    "except WebserviceException:\n",
    "    # No existing service with this name was found, so there is nothing to delete.\n",
    "    pass\n",
    "\n",
    "service = Model.deploy(ws, aci_service_name, [model], inference_config, deployment_config)\n",
    "\n",
    "service.wait_for_deployment(True)\n",
    "print(service.state)"
   ]
  },
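  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the deployment does not reach a healthy state, the container logs usually explain why. This step is not part of the original sample, but `get_logs()` on the service object is a convenient way to inspect them:\n",
    "\n",
    "```python\n",
    "# Print the container logs of the deployed service for troubleshooting\n",
    "print(service.get_logs())\n",
    "```"
   ]
  },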
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Test web service"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "test_sample = json.dumps({'features':{'type':1,'values':[4.3,3.0,1.1,0.1]},'label':2.0})\n",
    "\n",
    "test_sample_encoded = bytes(test_sample, encoding='utf8')\n",
    "prediction = service.run(input_data=test_sample_encoded)\n",
    "print(prediction)"
   ]
  },
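  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`service.run()` calls the service through the SDK. External clients would instead send an HTTP POST to the service's scoring endpoint. A minimal sketch (not part of the original sample, and it requires the `requests` package) using the same payload:\n",
    "\n",
    "```python\n",
    "import requests\n",
    "\n",
    "headers = {'Content-Type': 'application/json'}\n",
    "response = requests.post(service.scoring_uri, data=test_sample, headers=headers)\n",
    "print(response.status_code, response.text)\n",
    "```"
   ]
  },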
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Delete ACI to clean up"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": [
     "deploy service",
     "aci"
    ]
   },
   "outputs": [],
   "source": [
    "service.delete()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model Profiling\n",
    "\n",
    "You can also take advantage of the profiling feature to estimate CPU and memory requirements for models.\n",
    "\n",
    "```python\n",
    "profile = Model.profile(ws, \"profilename\", [model], inference_config, test_sample)\n",
    "profile.wait_for_profiling(True)\n",
    "profiling_results = profile.get_results()\n",
    "print(profiling_results)\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model Packaging\n",
    "\n",
    "If you want to build a Docker image that encapsulates your model and its dependencies, you can use the model packaging option. The output image will be pushed to your workspace's Azure Container Registry (ACR).\n",
    "\n",
    "You must include an Environment object in your inference configuration to use `Model.package()`.\n",
    "\n",
    "```python\n",
    "package = Model.package(ws, [model], inference_config)\n",
    "package.wait_for_creation(show_output=True)  # Or show_output=False to hide the Docker build logs.\n",
    "package.pull()\n",
    "```\n",
    "\n",
    "Instead of a fully-built image, you can also generate a Dockerfile and download all the assets needed to build an image on top of your Environment.\n",
    "\n",
    "```python\n",
    "package = Model.package(ws, [model], inference_config, generate_dockerfile=True)\n",
    "package.wait_for_creation(show_output=True)\n",
    "package.save(\"./local_context_dir\")\n",
    "```"
   ]
  }
 ],
 "metadata": {
  "authors": [
   {
    "name": "vaidyas"
   }
  ],
  "category": "deployment",
  "compute": [
   "None"
  ],
  "datasets": [
   "Iris"
  ],
  "deployment": [
   "Azure Container Instance"
  ],
  "exclude_from_index": false,
  "framework": [
   "PySpark"
  ],
  "friendly_name": "Register Spark model and deploy as webservice",
  "kernelspec": {
   "display_name": "Python 3.6",
   "language": "python",
   "name": "python36"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}