Mirror of https://github.com/Azure/MachineLearningNotebooks.git (synced 2025-12-23 20:00:06 -05:00)

Compare commits: release_up ... azureml-sd (8 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | ee5d0239a3 |  |
|  | 388111cedc |  |
|  | b86191ed7f |  |
|  | 22753486de |  |
|  | cf1d1dbf01 |  |
|  | 2e45d9800d |  |
|  | a9a8de02ec |  |
|  | dd8339e650 |  |
@@ -17,7 +17,7 @@ dependencies:
- notebook
- pywin32==227
- PySocks==1.7.1
- Pygments==2.11.2
- jsonschema==4.5.1
- conda-forge::pyqt==5.12.3

- pip:
@@ -9,9 +9,11 @@ dependencies:
- PyJWT < 2.0.0
- numpy==1.18.5
- pywin32==227
- cryptography<37.0.0

- pip:
  # Required packages for AzureML execution, history, and data preparation.
  - azure-mgmt-core==1.3.0
  - azure-core==1.21.1
  - azure-identity==1.7.0
  - azureml-defaults
@@ -11,9 +11,11 @@ dependencies:
- urllib3==1.26.7
- PyJWT < 2.0.0
- numpy==1.19.5
- cryptography<37.0.0

- pip:
  # Required packages for AzureML execution, history, and data preparation.
  - azure-mgmt-core==1.3.0
  - azure-core==1.21.1
  - azure-identity==1.7.0
  - azureml-defaults
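A hedged aside on the three environment hunks above, which converge on the same pinned azure-* versions: a conda specification file like these is typically loaded into an AzureML `Environment`. The file and environment names below are illustrative, not from the diff.

```python
# Minimal sketch, assuming one of the edited conda specs is available
# locally as "automl_env.yml" (an assumed name, not from this diff).
from azureml.core import Environment

env = Environment.from_conda_specification(
    name="automl-env",           # illustrative environment name
    file_path="automl_env.yml",  # path to the conda YAML edited above
)
```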
@@ -242,6 +242,34 @@
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.4 Configure data with ``OutputFileDatasetConfig`` objects\n",
    "This step shows how to configure output data from a pipeline step. One use case for this step is preprocessing data before feeding it to the training step. Intermediate data (the output of a step) is represented by an ``OutputFileDatasetConfig`` object. ``output_data`` is produced as the output of a step. Optionally, this data can be registered as a dataset by calling the ``register_on_complete`` method. If you create an ``OutputFileDatasetConfig`` in one step and use it as an input to another step, that data dependency between steps creates an implicit execution order in the pipeline.\n",
    "\n",
    "``OutputFileDatasetConfig`` objects return a directory, and by default write output to the default datastore of the workspace.\n",
    "\n",
    "Since direct instance creation for the class ``OutputTabularDatasetConfig`` is not allowed, we first create an instance of ``OutputFileDatasetConfig``. Then we use the ``read_parquet_files`` method to read the parquet files into an ``OutputTabularDatasetConfig``."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.data.output_dataset_config import OutputFileDatasetConfig\n",
    "\n",
    "output_data = OutputFileDatasetConfig(\n",
    "    name=\"processed_data\", destination=(dstore, \"outputdataset/{run-id}/{output-name}\")\n",
    ").as_upload()\n",
    "# output_data_dataset = output_data.register_on_complete(\n",
    "#     name='processed_data', description = 'files from prev step')\n",
    "output_data = output_data.read_parquet_files()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
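A hedged sketch of the implicit ordering the cell above describes: the tabular output would be wired into a downstream step roughly as follows. The script name "train.py" is a hypothetical, not part of this notebook; `compute_target` and `aml_run_config` are the objects configured elsewhere in it.

```python
from azureml.pipeline.steps import PythonScriptStep

# Hypothetical downstream step consuming the intermediate tabular output.
train_step = PythonScriptStep(
    script_name="train.py",          # assumed script, not in this diff
    source_directory="./scripts",
    arguments=["--training-data", output_data.as_input()],
    compute_target=compute_target,
    runconfig=aml_run_config,
)
```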
@@ -303,6 +331,48 @@
    "    print(compute_target.status.serialize())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Configure the training run's environment\n",
    "The next step is making sure that the remote training run has all the dependencies needed by the training steps. Dependencies and the runtime context are set by creating and configuring a ``RunConfiguration`` object.\n",
    "\n",
    "The code below shows two options for handling dependencies. As presented, with ``USE_CURATED_ENV = True``, the configuration is based on a [curated environment](https://docs.microsoft.com/en-us/azure/machine-learning/resource-curated-environments). Curated environments have prebuilt Docker images in the [Microsoft Container Registry](https://hub.docker.com/publishers/microsoftowner). For more information, see [Azure Machine Learning curated environments](https://docs.microsoft.com/en-us/azure/machine-learning/resource-curated-environments).\n",
    "\n",
    "The path taken if you change ``USE_CURATED_ENV`` to ``False`` shows the pattern for explicitly setting your dependencies. In that scenario, a new custom Docker image will be created and registered in an Azure Container Registry within your resource group (see [Introduction to private Docker container registries in Azure](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-intro)). Building and registering this image can take quite a few minutes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core.runconfig import RunConfiguration\n",
    "from azureml.core.conda_dependencies import CondaDependencies\n",
    "from azureml.core import Environment\n",
    "\n",
    "aml_run_config = RunConfiguration()\n",
    "aml_run_config.target = compute_target\n",
    "\n",
    "USE_CURATED_ENV = True\n",
    "if USE_CURATED_ENV:\n",
    "    curated_environment = Environment.get(\n",
    "        workspace=ws, name=\"AzureML-sklearn-0.24-ubuntu18.04-py37-cpu\"\n",
    "    )\n",
    "    aml_run_config.environment = curated_environment\n",
    "else:\n",
    "    aml_run_config.environment.python.user_managed_dependencies = False\n",
    "\n",
    "    # Add some packages relied on by data prep step\n",
    "    aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(\n",
    "        conda_packages=[\"pandas\", \"scikit-learn\"],\n",
    "        pip_packages=[\"azureml-sdk\", \"azureml-dataset-runtime[fuse,pandas]\"],\n",
    "        pin_sdk_version=False,\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
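One hedged aside: the curated environment named in the cell above is version-specific and may not exist in every workspace. A sketch like the following (not from the notebook) lists the curated environments actually available:

```python
from azureml.core import Environment

# Enumerate environments in workspace "ws"; curated ones use the "AzureML-" prefix.
envs = Environment.list(workspace=ws)
for name in sorted(envs):
    if name.startswith("AzureML-"):
        print(name)
```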
@@ -366,6 +436,46 @@
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Construct your pipeline steps\n",
    "Once you have the compute resource and environment created, you're ready to define your pipeline's steps. There are many built-in steps available via the Azure Machine Learning SDK, as you can see on the [reference documentation for the azureml.pipeline.steps package](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py). The most flexible class is [PythonScriptStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep?view=azure-ml-py), which runs a Python script.\n",
    "\n",
    "Your data preparation code is in a subdirectory (in this example, \"data_preprocessing_tabular.py\" in the directory \"./scripts\"). As part of the pipeline creation process, this directory is zipped and uploaded to the ``compute_target``, and the step runs the script specified as the value for ``script_name``.\n",
    "\n",
    "The ``arguments`` values specify the inputs and outputs of the step. In the example below, the baseline data is the ``input_ds_small`` dataset. The script data_preprocessing_tabular.py performs whatever data-transformation tasks are appropriate to the task at hand and outputs the data to ``output_data``, of type ``OutputFileDatasetConfig``. For more information, see [Moving data into and between ML pipeline steps (Python)](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-move-data-in-out-of-pipelines). The step will run on the machine defined by ``compute_target``, using the configuration ``aml_run_config``.\n",
    "\n",
    "Reuse of previous results (``allow_reuse``) is key when using pipelines in a collaborative environment, since eliminating unnecessary reruns offers agility. Reuse is the default behavior when the ``script_name``, ``inputs``, and the parameters of a step remain the same. When reuse is allowed, results from the previous run are immediately sent to the next step. If ``allow_reuse`` is set to ``False``, a new run will always be generated for this step during pipeline execution.\n",
    "\n",
    "> Note that only partitioned ``FileDataset`` and unpartitioned ``TabularDataset`` objects are supported when using such output as input."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.pipeline.steps import PythonScriptStep\n",
    "\n",
    "dataprep_source_dir = \"./scripts\"\n",
    "entry_point = \"data_preprocessing_tabular.py\"\n",
    "ds_input = input_ds_small.as_named_input(\"train_10_models\")\n",
    "\n",
    "data_prep_step = PythonScriptStep(\n",
    "    script_name=entry_point,\n",
    "    source_directory=dataprep_source_dir,\n",
    "    arguments=[\"--input\", ds_input, \"--output\", output_data],\n",
    "    compute_target=compute_target,\n",
    "    runconfig=aml_run_config,\n",
    "    allow_reuse=False,\n",
    ")\n",
    "\n",
    "input_ds_small = output_data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
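Continuing in hedged form, the step defined in this hunk would typically be assembled and submitted like the sketch below. The experiment name is an assumption, not from the notebook.

```python
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline

# Build a pipeline containing the data-prep step above and run it.
pipeline = Pipeline(workspace=ws, steps=[data_prep_step])
run = Experiment(ws, "tabular-data-prep").submit(pipeline)  # name is illustrative
run.wait_for_completion(show_output=True)
```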
@@ -513,13 +513,7 @@
    "conda_run_config.environment.docker.enabled = True\n",
    "\n",
    "# specify CondaDependencies obj\n",
    "conda_run_config.environment.python.conda_dependencies = (\n",
    "    automl_run.get_environment().python.conda_dependencies\n",
    ")\n",
    "\n",
    "conda_run_config.environment.python.conda_dependencies.add_pip_package(\n",
    "    \"dotnetcore2==2.1.23\"\n",
    ")"
    "conda_run_config.environment = automl_run.get_environment()"
   ]
  },
  {
@@ -648,28 +642,6 @@
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create the conda dependencies for setting up the service\n",
    "We need to create the conda dependencies comprising the *azureml* packages, using the training environment from the *automl_run*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "conda_dep = automl_run.get_environment().python.conda_dependencies\n",
    "\n",
    "with open(\"myenv.yml\", \"w\") as f:\n",
    "    f.write(conda_dep.serialize_to_string())\n",
    "with open(\"myenv.yml\", \"r\") as f:\n",
    "    print(f.read())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@@ -692,7 +664,7 @@
   "metadata": {},
   "source": [
    "### Deploy the service\n",
    "In the cell below, we deploy the service using the conda file and the scoring file from the previous steps. "
    "In the cell below, we deploy the service using the automl training environment and the scoring file from the previous steps. "
   ]
  },
  {
@@ -714,7 +686,7 @@
    "    description=\"Get local explanations for Machine test data\",\n",
    ")\n",
    "\n",
    "myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n",
    "myenv = automl_run.get_environment()\n",
    "inference_config = InferenceConfig(entry_script=\"score_explain.py\", environment=myenv)\n",
    "\n",
    "# Use configs and models generated above\n",
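For context, a hedged sketch of how an inference config like the one above is typically deployed. The service name, the `original_model` variable, and the ACI sizing are assumptions, not part of this diff.

```python
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

# Deploy the inference config above to Azure Container Instances.
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(
    workspace=ws,
    name="automl-explain-service",   # hypothetical service name
    models=[original_model],         # assumed: a Model registered earlier
    inference_config=inference_config,
    deployment_config=aci_config,
)
service.wait_for_deployment(show_output=True)
```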
@@ -361,7 +361,7 @@
    "\n",
    "batch_conda_deps = CondaDependencies.create(python_version=\"3.7\",\n",
    "                                            conda_packages=['pip==20.2.4'],\n",
    "                                            pip_packages=[\"tensorflow==1.15.2\", \"pillow\", \n",
    "                                            pip_packages=[\"tensorflow==1.15.2\", \"pillow\", \"protobuf==3.20.1\",\n",
    "                                                          \"azureml-core\", \"azureml-dataset-runtime[fuse]\"])\n",
    "batch_env = Environment(name=\"batch_environment\")\n",
    "batch_env.python.conda_dependencies = batch_conda_deps\n",
@@ -11,6 +11,8 @@ RUN pip install azureml-core
RUN pip install ray==0.8.7
RUN pip install ray[rllib,tune,serve]==0.8.7
RUN pip install tensorflow==1.14.0
RUN pip install 'msrest<0.7.0'
RUN pip install protobuf==3.20.0

RUN apt-get update
RUN apt-get install -y jq
@@ -37,8 +37,7 @@ RUN pip install gym[atari]==0.19.0
RUN pip install gym[accept-rom-license]==0.19.0

# Install pip dependencies
RUN HOROVOD_WITH_TENSORFLOW=1 \
    pip install 'matplotlib>=3.3,<3.4' \
RUN pip install 'matplotlib>=3.3,<3.4' \
    'psutil>=5.8,<5.9' \
    'tqdm>=4.59,<4.60' \
    'pandas>=1.1,<1.2' \
@@ -70,6 +69,9 @@ RUN pip install --no-cache-dir \
# This is required for ray 0.8.7
RUN pip install -U aiohttp==3.7.4

RUN pip install 'msrest<0.7.0'
RUN pip install protobuf==3.20.0

# This is needed for mpi to locate libpython
ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
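A hedged aside on these pins: once an image containing the hunks above is built, a check like the following (a sketch, not from the repo) confirms the resolver did not override them.

```python
import pkg_resources

# Both assertions mirror the pins added by the Dockerfile hunks above.
assert pkg_resources.get_distribution("protobuf").version == "3.20.0"
assert pkg_resources.parse_version(
    pkg_resources.get_distribution("msrest").version
) < pkg_resources.parse_version("0.7.0")
print("pins verified")
```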
@@ -93,7 +93,7 @@
   "source": [
    "%matplotlib inline\n",
    "\n",
    "# Azure Machine Learning Core imports\n",
    "# Azure Machine Learning core imports\n",
    "import azureml.core\n",
    "\n",
    "# Check core SDK version number\n",
@@ -12,6 +12,7 @@ RUN pip install azureml-dataset-runtime
RUN pip install ray==0.8.7
RUN pip install ray[rllib,tune,serve]==0.8.7
RUN pip install tensorflow==1.14.0
RUN pip install 'msrest<0.7.0'

RUN apt-get update
RUN apt-get install -y jq
@@ -30,5 +30,9 @@ RUN pip install ray-on-aml==0.1.6 & \
    conda install -y -c conda-forge x264='1!152.20180717' ffmpeg=4.0.2 && \
    conda install -c anaconda opencv

RUN pip install protobuf==3.20.0

RUN pip install --upgrade ray==0.8.3 \
    ray[rllib,dashboard,tune]==0.8.3
    ray[rllib,dashboard,tune]==0.8.3

RUN pip install 'msrest<0.7.0'
@@ -28,7 +28,11 @@ RUN cd multiagent-particle-envs && \

RUN pip3 install ray-on-aml==0.1.6

RUN pip install protobuf==3.20.0

RUN pip3 install --upgrade \
    ray==0.8.7 \
    ray[rllib]==0.8.7 \
    ray[tune]==0.8.7
    ray[tune]==0.8.7

RUN pip install 'msrest<0.7.0'
@@ -5,17 +5,19 @@ import os
import argparse
import datetime
import time
import tensorflow as tf
import tensorflow.compat.v1 as tf
from math import ceil
import numpy as np
import sys
import shutil
from tensorflow.contrib.slim.python.slim.nets import inception_v3
import subprocess
import tf_slim

from azureml.core import Run
from azureml.core.model import Model
from azureml.core.dataset import Dataset

slim = tf.contrib.slim
slim = tf_slim

image_size = 299
num_channel = 3
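In hedged summary, this hunk applies the standard TF1-to-TF2 migration pattern: run the old graph-mode code through `tensorflow.compat.v1` and replace the removed `tf.contrib.slim` with the standalone `tf_slim` package. A minimal sketch of the same pattern (shapes are illustrative):

```python
import tensorflow.compat.v1 as tf  # TF1-style API surface on a TF2 install
import tf_slim as slim             # standalone replacement for tf.contrib.slim

tf.disable_v2_behavior()           # must run before any TF1 graph is built
x = tf.placeholder(tf.float32, [1, 299, 299, 3])  # graph-mode input tensor
```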
@@ -32,16 +34,18 @@ def get_class_label_dict(labels_dir):

def init():
    global g_tf_sess, probabilities, label_dict, input_images
    subprocess.run(["git", "clone", "https://github.com/tensorflow/models/"])
    sys.path.append("./models/research/slim")

    parser = argparse.ArgumentParser(description="Start a tensorflow model serving")
    parser.add_argument('--model_name', dest="model_name", required=True)
    parser.add_argument('--labels_dir', dest="labels_dir", required=True)
    args, _ = parser.parse_known_args()

    from nets import inception_v3, inception_utils
    label_dict = get_class_label_dict(args.labels_dir)
    classes_num = len(label_dict)

    with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
    tf.disable_v2_behavior()
    with slim.arg_scope(inception_utils.inception_arg_scope()):
        input_images = tf.placeholder(tf.float32, [1, image_size, image_size, num_channel])
        logits, _ = inception_v3.inception_v3(input_images,
                                              num_classes=classes_num,
@@ -247,7 +247,7 @@
    "    config = AmlCompute.provisioning_configuration(vm_size=\"STANDARD_NC6\",\n",
    "                                                   vm_priority=\"lowpriority\", \n",
    "                                                   min_nodes=0, \n",
    "                                                   max_nodes=1)\n",
    "                                                   max_nodes=2)\n",
    "\n",
    "    compute_target = ComputeTarget.create(workspace=ws, name=compute_name, provisioning_configuration=config)\n",
    "    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)"
@@ -305,9 +305,10 @@
    "from azureml.core.conda_dependencies import CondaDependencies\n",
    "from azureml.core.runconfig import DEFAULT_GPU_IMAGE\n",
    "\n",
    "cd = CondaDependencies.create(python_version=\"3.7\",\n",
    "cd = CondaDependencies.create(python_version=\"3.8\",\n",
    "                              conda_packages=['pip==20.2.4'],\n",
    "                              pip_packages=[\"tensorflow-gpu==1.15.2\",\n",
    "                              pip_packages=[\"tensorflow-gpu==2.3.0\",\n",
    "                                            \"tf_slim==1.1.0\", \"protobuf==3.20.1\",\n",
    "                                            \"azureml-core\", \"azureml-dataset-runtime[fuse]\"])\n",
    "\n",
    "env = Environment(name=\"parallelenv\")\n",