{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Deploy Multiple Models as Webservice\n",
"\n",
"This example shows how to deploy a Webservice with multiple models in step-by-step fashion:\n",
"\n",
" 1. Register Models\n",
" 2. Deploy Models as Webservice"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check core SDK version number\n",
"import azureml.core\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Workspace\n",
"\n",
"Initialize a workspace object from persisted configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create workspace"
]
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we will be using and registering two models.\n",
"\n",
"First we will train two simple models on the [diabetes dataset](https://scikit-learn.org/stable/datasets/index.html#diabetes-dataset) included with scikit-learn, serializing them to files in the current directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import joblib\n",
"import sklearn\n",
"\n",
"from sklearn.datasets import load_diabetes\n",
"from sklearn.linear_model import BayesianRidge, Ridge\n",
"\n",
"x, y = load_diabetes(return_X_y=True)\n",
"\n",
"first_model = Ridge().fit(x, y)\n",
"second_model = BayesianRidge().fit(x, y)\n",
"\n",
"joblib.dump(first_model, \"first_model.pkl\")\n",
"joblib.dump(second_model, \"second_model.pkl\")\n",
"\n",
"print(\"Trained models using scikit-learn {}.\".format(sklearn.__version__))"
]
},
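{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can sanity-check the two serialized files locally before registering them. This is only a sketch: it reloads the pickles and scores the first two rows of the training data, the same shape of input the web service will receive later.\n",
"\n",
"```python\n",
"loaded_1 = joblib.load(\"first_model.pkl\")\n",
"loaded_2 = joblib.load(\"second_model.pkl\")\n",
"print(loaded_1.predict(x[:2]), loaded_2.predict(x[:2]))\n",
"```"
]
},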
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have our trained models locally, we will register them as Models with the names `my_first_model` and `my_second_model` in the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"register model from file"
]
},
"outputs": [],
"source": [
"from azureml.core.model import Model\n",
"\n",
"my_model_1 = Model.register(model_path=\"first_model.pkl\",\n",
"                            model_name=\"my_first_model\",\n",
"                            workspace=ws)\n",
"\n",
"my_model_2 = Model.register(model_path=\"second_model.pkl\",\n",
"                            model_name=\"my_second_model\",\n",
"                            workspace=ws)"
]
},
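{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check, you can print the name and version of each registered model; the version number is what appears in the `AZUREML_MODEL_DIR` folder structure discussed below. A minimal sketch using the `name` and `version` attributes of the registered `Model` objects:\n",
"\n",
"```python\n",
"print(my_model_1.name, my_model_1.version)\n",
"print(my_model_2.name, my_model_2.version)\n",
"```"
]
},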
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Write the Entry Script\n",
"Write the script that will be used to run predictions with your models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Model.get_model_path()\n",
"\n",
"To get the paths of your models, use the `Model.get_model_path(model_name, version=None, _workspace=None)` method. This method finds the path to a model using the name under which the model is registered in the workspace.\n",
"\n",
"In this example, we do not use the optional arguments `version` and `_workspace`.\n",
"\n",
"#### Using environment variable AZUREML_MODEL_DIR\n",
"\n",
"In other [examples](../deploy-to-cloud/score.py) with a single model deployment, we use the environment variable `AZUREML_MODEL_DIR` and the model file name to get the model path.\n",
"\n",
"For single model deployments, this environment variable is the path to the model folder (`./azureml-models/$MODEL_NAME/$VERSION`). When we deploy multiple models, the environment variable is set to the folder containing all models (`./azureml-models`).\n",
"\n",
"If you're using multiple models and you know the versions of the models you deploy, you can use this method to get the model path:\n",
"\n",
"```python\n",
"# Construct the model path using the registered model name, version, and model file name\n",
"model_1_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_first_model', '1', 'first_model.pkl')\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile score.py\n",
"import joblib\n",
"import json\n",
"import numpy as np\n",
"\n",
"from azureml.core.model import Model\n",
"\n",
"def init():\n",
"    global model_1, model_2\n",
"    # Here \"my_first_model\" is the name of the model registered under the workspace.\n",
"    # This call will return the path to the .pkl file on the local disk.\n",
"    model_1_path = Model.get_model_path(model_name='my_first_model')\n",
"    model_2_path = Model.get_model_path(model_name='my_second_model')\n",
"\n",
"    # Deserialize the model files back into scikit-learn models.\n",
"    model_1 = joblib.load(model_1_path)\n",
"    model_2 = joblib.load(model_2_path)\n",
"\n",
"# Note you can pass in multiple rows for scoring.\n",
"def run(raw_data):\n",
"    try:\n",
"        data = json.loads(raw_data)['data']\n",
"        data = np.array(data)\n",
"\n",
"        # Call predict() on each model\n",
"        result_1 = model_1.predict(data)\n",
"        result_2 = model_2.predict(data)\n",
"\n",
"        # You can return any JSON-serializable value.\n",
"        return {\"prediction1\": result_1.tolist(), \"prediction2\": result_2.tolist()}\n",
"    except Exception as e:\n",
"        result = str(e)\n",
"        return result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now create and/or use an Environment object when deploying a Webservice. The Environment may already be registered with your Workspace, or it will be registered as part of the Webservice deployment. Please note that your environment must include azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.\n",
"\n",
"More information can be found in our [using environments notebook](../training/using-environments/using-environments.ipynb)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"\n",
"env = Environment(\"deploytocloudenv\")\n",
"env.python.conda_dependencies.add_pip_package(\"joblib\")\n",
"env.python.conda_dependencies.add_pip_package(\"numpy\")\n",
"env.python.conda_dependencies.add_pip_package(\"scikit-learn=={}\".format(sklearn.__version__))"
]
},
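{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above does not pin `azureml-defaults` explicitly. If you want to add it as the note above suggests, one option (a sketch, not the only way) is to build the pip dependencies in a single `CondaDependencies.create` call and assign the result to the environment:\n",
"\n",
"```python\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"\n",
"env.python.conda_dependencies = CondaDependencies.create(\n",
"    pip_packages=[\"joblib\", \"numpy\",\n",
"                  \"scikit-learn=={}\".format(sklearn.__version__),\n",
"                  \"azureml-defaults>=1.0.45\"])\n",
"```"
]
},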
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Inference Configuration\n",
"\n",
"There is now support for a source directory: you can upload an entire folder from your local machine as dependencies for the Webservice.\n",
"Note: in that case, the entry_script and the environment's file_path are relative to the source_directory path; myenv.docker.base_dockerfile is a string containing extra Docker steps or the contents of a Dockerfile.\n",
"\n",
"Sample code for using a source directory:\n",
"\n",
"```python\n",
"from azureml.core.environment import Environment\n",
"from azureml.core.model import InferenceConfig\n",
"\n",
"myenv = Environment.from_conda_specification(name='myenv', file_path='env/myenv.yml')\n",
"\n",
"# explicitly set base_image to None when setting base_dockerfile\n",
"myenv.docker.base_image = None\n",
"# add extra docker commands to execute\n",
"myenv.docker.base_dockerfile = \"FROM ubuntu\\n RUN echo \\\"hello\\\"\"\n",
"\n",
"inference_config = InferenceConfig(source_directory=\"C:/abc\",\n",
"                                   entry_script=\"x/y/score.py\",\n",
"                                   environment=myenv)\n",
"```\n",
"\n",
" - file_path: input parameter to Environment.from_conda_specification(); the referenced conda specification file manages conda and pip package dependencies.\n",
" - env.docker.base_dockerfile: any extra Docker steps you want to inject into the Docker file\n",
" - source_directory: holds the source path as a string; this entire folder is added to the image, so it is easy to access any files within this folder or its subfolders\n",
" - entry_script: contains the logic for initializing your models and running predictions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"create image"
]
},
"outputs": [],
"source": [
"from azureml.core.model import InferenceConfig\n",
"\n",
"inference_config = InferenceConfig(entry_script=\"score.py\", environment=env)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy Model as Webservice on Azure Container Instance\n",
"\n",
"Note that the service creation can take a few minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"azuremlexception-remarks-sample"
]
},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"aci_service_name = \"aciservice-multimodel\"\n",
"\n",
"deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n",
"\n",
"service = Model.deploy(ws, aci_service_name, [my_model_1, my_model_2], inference_config, deployment_config, overwrite=True)\n",
"service.wait_for_deployment(True)\n",
"\n",
"print(service.state)"
]
},
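{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the deployment does not reach a Healthy state, the container logs usually show why (for example, a missing package or an error raised in `init()`). A minimal sketch using the SDK's `Webservice.get_logs()`:\n",
"\n",
"```python\n",
"if service.state != \"Healthy\":\n",
"    print(service.get_logs())\n",
"```"
]
},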
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test web service"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"test_sample = json.dumps({'data': x[0:2].tolist()})\n",
"\n",
"prediction = service.run(test_sample)\n",
"\n",
"print(prediction)"
]
},
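{
"cell_type": "markdown",
"metadata": {},
"source": [
"`service.run()` goes through the SDK; external clients would instead POST the same JSON payload to the service's scoring endpoint. A minimal sketch using `requests` and the service's `scoring_uri` attribute (authentication is not enabled by default for the ACI configuration above, so no key header should be needed):\n",
"\n",
"```python\n",
"import requests\n",
"\n",
"headers = {\"Content-Type\": \"application/json\"}\n",
"response = requests.post(service.scoring_uri, data=test_sample, headers=headers)\n",
"print(response.json())\n",
"```"
]
},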
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Delete ACI to clean up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"deploy service",
"aci"
]
},
"outputs": [],
"source": [
"service.delete()"
]
}
],
"metadata": {
"authors": [
{
"name": "jenns"
}
],
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
"name": "python36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}