updated pipeline notebooks with expanded tutorial

This commit is contained in:
Dipankar Ray
2018-11-21 20:00:07 -08:00
parent e039b98ee6
commit ef5844fffd
19 changed files with 3793 additions and 286 deletions


@@ -4,8 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"Licensed under the MIT License."
]
},
@@ -13,7 +12,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook demonstrates how to run batch scoring job. __[Inception-V3 model](https://arxiv.org/abs/1512.00567)__ and unlabeled images from __[ImageNet](http://image-net.org/)__ dataset will be used. It registers a pretrained inception model in model registry then uses the model to do batch scoring on images in a blob container."
"# Using Azure Machine Learning Pipelines for batch prediction\n",
"\n",
"In this notebook we will demonstrate how to run a batch scoring job using Azure Machine Learning pipelines. Our example job will be to take an already-trained image classification model, and run that model on some unlabeled images. The image classification model that we'll use is the __[Inception-V3 model](https://arxiv.org/abs/1512.00567)__ and we'll run this model on unlabeled images from the __[ImageNet](http://image-net.org/)__ dataset. \n",
"\n",
"The outline of this notebook is as follows:\n",
"\n",
"- Register the pretrained inception model into the model registry. \n",
"- Store the dataset images in a blob container.\n",
"- Use the registered model to do batch scoring on the images in the data blob container."
]
},
{
@@ -21,7 +28,24 @@
"metadata": {},
"source": [
"## Prerequisites\n",
"Make sure you go through the [00. Installation and Configuration](./00.configuration.ipynb) Notebook first if you haven't.\n"
"Make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core import Datastore\n",
"from azureml.core import Experiment\n",
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.core.datastore import Datastore\n",
"from azureml.core.runconfig import CondaDependencies, RunConfiguration\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import PythonScriptStep"
]
},
{
@@ -37,13 +61,28 @@
"print('Workspace name: ' + ws.name, \n",
" 'Azure region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up machine learning resources"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up datastores\n",
"First, lets access the datastore that has the model, labels, and images. \n",
"\n",
"# Also create a Project and attach to Workspace\n",
"scripts_folder = \"scripts\"\n",
"### Create a datastore that points to a blob container containing sample images\n",
"\n",
"if not os.path.isdir(scripts_folder):\n",
" os.mkdir(scripts_folder)"
"We have created a public blob container `sampledata` on an account named `pipelinedata`, containing images from the ImageNet evaluation set. In the next step, we create a datastore with the name `images_datastore`, which points to this container. In the call to `register_azure_blob_container` below, setting the `overwrite` flag to `True` overwrites any datastore that was created previously with that name. \n",
"\n",
"This step can be changed to point to your blob container by providing your own `datastore_name`, `container_name`, and `account_name`."
]
},
{
@@ -52,19 +91,72 @@
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
"from azureml.core.datastore import Datastore\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.pipeline.steps import PythonScriptStep\n",
"from azureml.core.runconfig import CondaDependencies, RunConfiguration"
"account_name = \"pipelinedata\"\n",
"datastore_name=\"images_datastore\"\n",
"container_name=\"sampledata\"\n",
"\n",
"batchscore_blob = Datastore.register_azure_blob_container(ws, \n",
" datastore_name=datastore_name, \n",
" container_name= container_name, \n",
" account_name=account_name, \n",
" overwrite=True)"
]
},
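{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch, registering your own *private* container looks the same but also needs the storage account key. All names below are placeholders rather than real resources:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# illustrative sketch only -- substitute your own storage account, container, and key\n",
"# my_datastore = Datastore.register_azure_blob_container(ws, \n",
"#                       datastore_name=\"my_images_datastore\", \n",
"#                       container_name=\"my-container\", \n",
"#                       account_name=\"mystorageaccount\", \n",
"#                       account_key=\"<my-account-key>\", \n",
"#                       overwrite=True)"
]
},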
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create and attach Compute targets\n",
"Next, lets specify the default datastore for the outputs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def_data_store = ws.get_default_datastore()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configure data references\n",
"Now you need to add references to the data, as inputs to the appropriate pipeline steps in your pipeline. A data source in a pipeline is represented by a DataReference object. The DataReference object points to data that lives in, or is accessible from, a datastore. We need DataReference objects corresponding to the following: the directory containing the input images, the directory in which the pretrained model is stored, the directory containing the labels, and the output directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_images = DataReference(datastore=batchscore_blob, \n",
" data_reference_name=\"input_images\",\n",
" path_on_datastore=\"batchscoring/images\",\n",
" mode=\"download\"\n",
" )\n",
"model_dir = DataReference(datastore=batchscore_blob, \n",
" data_reference_name=\"input_model\",\n",
" path_on_datastore=\"batchscoring/models\",\n",
" mode=\"download\" \n",
" )\n",
"label_dir = DataReference(datastore=batchscore_blob, \n",
" data_reference_name=\"input_labels\",\n",
" path_on_datastore=\"batchscoring/labels\",\n",
" mode=\"download\" \n",
" )\n",
"output_dir = PipelineData(name=\"scores\", \n",
" datastore=def_data_store, \n",
" output_path_on_compute=\"batchscoring/results\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create and attach Compute targets\n",
"Use the below code to create and attach Compute targets. "
]
},
@@ -77,31 +169,31 @@
"import os\n",
"\n",
"# choose a name for your cluster\n",
"compute_name = os.environ.get(\"BATCHAI_CLUSTER_NAME\", \"gpucluster\")\n",
"compute_min_nodes = os.environ.get(\"BATCHAI_CLUSTER_MIN_NODES\", 0)\n",
"compute_max_nodes = os.environ.get(\"BATCHAI_CLUSTER_MAX_NODES\", 4)\n",
"vm_size = os.environ.get(\"BATCHAI_CLUSTER_SKU\", \"STANDARD_NC6\")\n",
"aml_compute_name = os.environ.get(\"AML_COMPUTE_NAME\", \"gpu-cluster\")\n",
"cluster_min_nodes = os.environ.get(\"AML_COMPUTE_MIN_NODES\", 0)\n",
"cluster_max_nodes = os.environ.get(\"AML_COMPUTE_MAX_NODES\", 1)\n",
"vm_size = os.environ.get(\"AML_COMPUTE_SKU\", \"STANDARD_NC6\")\n",
"\n",
"\n",
"if compute_name in ws.compute_targets:\n",
" compute_target = ws.compute_targets[compute_name]\n",
"if aml_compute_name in ws.compute_targets:\n",
" compute_target = ws.compute_targets[aml_compute_name]\n",
" if compute_target and type(compute_target) is AmlCompute:\n",
" print('found compute target. just use it. ' + compute_name)\n",
" print('found compute target. just use it. ' + aml_compute_name)\n",
"else:\n",
" print('creating a new compute target...')\n",
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = vm_size, # NC6 is GPU-enabled\n",
" vm_priority = 'lowpriority', # optional\n",
" min_nodes = compute_min_nodes, \n",
" max_nodes = compute_max_nodes)\n",
" min_nodes = cluster_min_nodes, \n",
" max_nodes = cluster_max_nodes)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
" compute_target = ComputeTarget.create(ws, aml_compute_name, provisioning_config)\n",
" \n",
" # can poll for a minimum number of nodes and for a specific timeout. \n",
" # if no min node count is provided it will use the scale settings for the cluster\n",
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
" \n",
" # For a more detailed view of current BatchAI cluster status, use the 'status' property \n",
" # For a more detailed view of current Azure Machine Learning Compute status, use the 'status' property \n",
" print(compute_target.status.serialize())"
]
},
@@ -109,150 +201,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Python scripts to run"
"## Prepare the Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Python scripts that run the batch scoring. `batchai_score.py` takes input images in `dataset_path`, pretrained models in `model_dir` and outputs a `results-label.txt` to `output_dir`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile $scripts_folder/batchai_score.py\n",
"import os\n",
"import argparse\n",
"import datetime,time\n",
"import tensorflow as tf\n",
"from math import ceil\n",
"import numpy as np\n",
"import shutil\n",
"from tensorflow.contrib.slim.python.slim.nets import inception_v3\n",
"from azureml.core.model import Model\n",
"### Download the Model\n",
"\n",
"slim = tf.contrib.slim\n",
"\n",
"parser = argparse.ArgumentParser(description=\"Start a tensorflow model serving\")\n",
"parser.add_argument('--model_name', dest=\"model_name\", required=True)\n",
"parser.add_argument('--label_dir', dest=\"label_dir\", required=True)\n",
"parser.add_argument('--dataset_path', dest=\"dataset_path\", required=True)\n",
"parser.add_argument('--output_dir', dest=\"output_dir\", required=True)\n",
"parser.add_argument('--batch_size', dest=\"batch_size\", type=int, required=True)\n",
"\n",
"args = parser.parse_args()\n",
"\n",
"image_size = 299\n",
"num_channel = 3\n",
"\n",
"# create output directory if it does not exist\n",
"os.makedirs(args.output_dir, exist_ok=True)\n",
"\n",
"def get_class_label_dict(label_file):\n",
" label = []\n",
" proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()\n",
" for l in proto_as_ascii_lines:\n",
" label.append(l.rstrip())\n",
" return label\n",
"\n",
"\n",
"class DataIterator:\n",
" def __init__(self, data_dir):\n",
" self.file_paths = []\n",
" image_list = os.listdir(data_dir)\n",
" total_size = len(image_list)\n",
" self.file_paths = [data_dir + '/' + file_name.rstrip() for file_name in image_list ]\n",
"\n",
" self.labels = [1 for file_name in self.file_paths]\n",
"\n",
" @property\n",
" def size(self):\n",
" return len(self.labels)\n",
"\n",
" def input_pipeline(self, batch_size):\n",
" images_tensor = tf.convert_to_tensor(self.file_paths, dtype=tf.string)\n",
" labels_tensor = tf.convert_to_tensor(self.labels, dtype=tf.int64)\n",
" input_queue = tf.train.slice_input_producer([images_tensor, labels_tensor], shuffle=False)\n",
" labels = input_queue[1]\n",
" images_content = tf.read_file(input_queue[0])\n",
"\n",
" image_reader = tf.image.decode_jpeg(images_content, channels=num_channel, name=\"jpeg_reader\")\n",
" float_caster = tf.cast(image_reader, tf.float32)\n",
" new_size = tf.constant([image_size, image_size], dtype=tf.int32)\n",
" images = tf.image.resize_images(float_caster, new_size)\n",
" images = tf.divide(tf.subtract(images, [0]), [255])\n",
"\n",
" image_batch, label_batch = tf.train.batch([images, labels], batch_size=batch_size, capacity=5 * batch_size)\n",
" return image_batch\n",
"\n",
"def main(_):\n",
" start_time = datetime.datetime.now()\n",
" label_file_name = os.path.join(args.label_dir, \"labels.txt\")\n",
" label_dict = get_class_label_dict(label_file_name)\n",
" classes_num = len(label_dict)\n",
" test_feeder = DataIterator(data_dir=args.dataset_path)\n",
" total_size = len(test_feeder.labels)\n",
" count = 0\n",
" # get model from model registry\n",
" model_path = Model.get_model_path(args.model_name)\n",
" with tf.Session() as sess:\n",
" test_images = test_feeder.input_pipeline(batch_size=args.batch_size)\n",
" with slim.arg_scope(inception_v3.inception_v3_arg_scope()):\n",
" input_images = tf.placeholder(tf.float32, [args.batch_size, image_size, image_size, num_channel])\n",
" logits, _ = inception_v3.inception_v3(input_images,\n",
" num_classes=classes_num,\n",
" is_training=False)\n",
" probabilities = tf.argmax(logits, 1)\n",
"\n",
" sess.run(tf.global_variables_initializer())\n",
" sess.run(tf.local_variables_initializer())\n",
" coord = tf.train.Coordinator()\n",
" threads = tf.train.start_queue_runners(sess=sess, coord=coord)\n",
" saver = tf.train.Saver()\n",
" saver.restore(sess, model_path)\n",
" out_filename = os.path.join(args.output_dir, \"result-labels.txt\")\n",
" with open(out_filename, \"w\") as result_file:\n",
" i = 0\n",
" while count < total_size and not coord.should_stop():\n",
" test_images_batch = sess.run(test_images)\n",
" file_names_batch = test_feeder.file_paths[i*args.batch_size: min(test_feeder.size, (i+1)*args.batch_size)]\n",
" results = sess.run(probabilities, feed_dict={input_images: test_images_batch})\n",
" new_add = min(args.batch_size, total_size-count)\n",
" count += new_add\n",
" i += 1\n",
" for j in range(new_add):\n",
" result_file.write(os.path.basename(file_names_batch[j]) + \": \" + label_dict[results[j]] + \"\\n\")\n",
" result_file.flush()\n",
" coord.request_stop()\n",
" coord.join(threads)\n",
" \n",
" # copy the file to artifacts\n",
" shutil.copy(out_filename, \"./outputs/\")\n",
" # Move the processed data out of the blob so that the next run can process the data.\n",
"\n",
"if __name__ == \"__main__\":\n",
" tf.app.run()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare Model and Input data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download Model\n",
"\n",
"Download and extract model from http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz to `\"models\"`"
"Download and extract the model from http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz to `\"models\"`"
]
},
{
@@ -286,99 +244,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a datastore that points to blob container containing sample images\n",
"\n",
"We have created a public blob container `sampledata` on an account named `pipelinedata` containing images from ImageNet evaluation set. In the next step, we create a datastore with name `images_datastore` that points to this container. The `overwrite=True` step overwrites any datastore that was created previously with that name. \n",
"\n",
"This step can be changed to point to your blob container by providing an additional `account_key` parameter with `account_name`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"account_name = \"pipelinedata\"\n",
"sample_data = Datastore.register_azure_blob_container(ws, datastore_name=\"images_datastore\", container_name=\"sampledata\", \n",
" account_name=account_name, \n",
" overwrite=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Output datastore"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We write the outputs to the default datastore"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"default_ds = ws.get_default_datastore()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Specify where the data is stored or will be written to"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.conda_dependencies import CondaDependencies\n",
"from azureml.data.data_reference import DataReference\n",
"from azureml.pipeline.core import Pipeline, PipelineData\n",
"from azureml.core import Datastore\n",
"from azureml.core import Experiment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_images = DataReference(datastore=sample_data, \n",
" data_reference_name=\"input_images\",\n",
" path_on_datastore=\"batchscoring/images\",\n",
" mode=\"download\"\n",
" )\n",
"model_dir = DataReference(datastore=sample_data, \n",
" data_reference_name=\"input_model\",\n",
" path_on_datastore=\"batchscoring/models\",\n",
" mode=\"download\" \n",
" )\n",
"label_dir = DataReference(datastore=sample_data, \n",
" data_reference_name=\"input_labels\",\n",
" path_on_datastore=\"batchscoring/labels\",\n",
" mode=\"download\" \n",
" )\n",
"output_dir = PipelineData(name=\"scores\", \n",
" datastore=default_ds, \n",
" output_path_on_compute=\"batchscoring/results\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Register the model with Workspace"
"### Register the model with Workspace"
]
},
{
@@ -404,7 +270,40 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Specify environment to run the script"
"## Write your scoring script"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To do the scoring, we use a batch scoring script `batch_scoring.py`, which is located in the same directory that this notebook is in. You can take a look at this script to see how you might modify it for your custom batch scoring task.\n",
"\n",
"The python script `batch_scoring.py` takes input images, applies the image classification model to these images, and outputs a classification result to a results file.\n",
"\n",
"The script `batch_scoring.py` takes the following parameters:\n",
"\n",
"- `--model_name`: the name of the model being used, which is expected to be in the `model_dir` directory\n",
"- `--label_dir` : the directory holding the `labels.txt` file \n",
"- `--dataset_path`: the directory containing the input images\n",
"- `--output_dir` : the script will run the model on the data and output a `results-label.txt` to this directory\n",
"- `--batch_size` : the batch size used in running the model.\n"
]
},
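{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below is a minimal sketch (not the full script) of how `batch_scoring.py` parses these parameters with `argparse`; the argument values passed in here are illustrative placeholders:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import argparse\n",
"\n",
"# sketch of the command-line interface of batch_scoring.py\n",
"parser = argparse.ArgumentParser(description=\"Start a tensorflow model serving\")\n",
"parser.add_argument('--model_name', dest=\"model_name\", required=True)\n",
"parser.add_argument('--label_dir', dest=\"label_dir\", required=True)\n",
"parser.add_argument('--dataset_path', dest=\"dataset_path\", required=True)\n",
"parser.add_argument('--output_dir', dest=\"output_dir\", required=True)\n",
"parser.add_argument('--batch_size', dest=\"batch_size\", type=int, required=True)\n",
"\n",
"# parse a set of placeholder values so the sketch runs inside the notebook\n",
"args = parser.parse_args(['--model_name', 'inception',\n",
"                          '--label_dir', 'labels',\n",
"                          '--dataset_path', 'images',\n",
"                          '--output_dir', 'outputs',\n",
"                          '--batch_size', '20'])\n",
"print(args)"
]
},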
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build and run the batch scoring pipeline\n",
"You have everything you need to build the pipeline. Lets put all these together."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Specify the environment to run the script\n",
"Specify the conda dependencies for your script. You will need this object when you create the pipeline step later on."
]
},
{
@@ -418,24 +317,18 @@
"cd = CondaDependencies.create(pip_packages=[\"tensorflow-gpu==1.10.0\", \"azureml-defaults\"])\n",
"\n",
"# Runconfig\n",
"batchai_run_config = RunConfiguration(conda_dependencies=cd)\n",
"batchai_run_config.environment.docker.enabled = True\n",
"batchai_run_config.environment.docker.gpu_support = True\n",
"batchai_run_config.environment.docker.base_image = DEFAULT_GPU_IMAGE\n",
"batchai_run_config.environment.spark.precache_packages = False"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Steps to run"
"amlcompute_run_config = RunConfiguration(conda_dependencies=cd)\n",
"amlcompute_run_config.environment.docker.enabled = True\n",
"amlcompute_run_config.environment.docker.gpu_support = True\n",
"amlcompute_run_config.environment.docker.base_image = DEFAULT_GPU_IMAGE\n",
"amlcompute_run_config.environment.spark.precache_packages = False"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Specify the parameters for your pipeline\n",
"A subset of the parameters to the python script can be given as input when we re-run a `PublishedPipeline`. In the current example, we define `batch_size` taken by the script as such parameter."
]
},
@@ -449,6 +342,14 @@
"batch_size_param = PipelineParameter(name=\"param_batch_size\", default_value=20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create the pipeline step\n",
"Create the pipeline step using the script, environment configuration, and parameters. Specify the compute target you already attached to your workspace as the target of execution of the script. We will use PythonScriptStep to create the pipeline step."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -458,8 +359,8 @@
"inception_model_name = \"inception_v3.ckpt\"\n",
"\n",
"batch_score_step = PythonScriptStep(\n",
" name=\"batch ai scoring\",\n",
" script_name=\"batchai_score.py\",\n",
" name=\"batch_scoring\",\n",
" script_name=\"batch_scoring.py\",\n",
" arguments=[\"--dataset_path\", input_images, \n",
" \"--model_name\", \"inception\",\n",
" \"--label_dir\", label_dir, \n",
@@ -468,11 +369,18 @@
" compute_target=compute_target,\n",
" inputs=[input_images, label_dir],\n",
" outputs=[output_dir],\n",
" runconfig=batchai_run_config,\n",
" source_directory=scripts_folder\n",
" runconfig=amlcompute_run_config\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the pipeline\n",
"At this point you can run the pipeline and examine the output it produced. "
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -487,7 +395,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Monitor run"
"### Monitor the run"
]
},
{
@@ -513,7 +421,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Download and review output"
"### Download and review output"
]
},
{
@@ -542,14 +450,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Publish a pipeline and rerun using a REST call"
"## Publish a pipeline and rerun using a REST call"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a published pipeline"
"### Create a published pipeline\n",
"Once you are satisfied with the outcome of the run, you can publish the pipeline to run it with different input values later. When you publish a pipeline, you will get a REST endpoint that accepts invoking of the pipeline with the set of parameters you have already incorporated above using PipelineParameter."
]
},
{
@@ -559,7 +468,7 @@
"outputs": [],
"source": [
"published_pipeline = pipeline_run.publish_pipeline(\n",
" name=\"Inception v3 scoring\", description=\"Batch scoring using Inception v3 model\", version=\"1.0\")\n",
" name=\"Inception_v3_scoring\", description=\"Batch scoring using Inception v3 model\", version=\"1.0\")\n",
"\n",
"published_id = published_pipeline.id"
]
@@ -568,14 +477,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Rerun using REST call"
"## Rerun the pipeline using the REST endpoint"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get AAD token"
"### Get AAD token"
]
},
{
@@ -595,7 +504,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run published pipeline using its REST endpoint"
"### Run published pipeline"
]
},
{
@@ -619,7 +528,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Monitor the new run"
"### Monitor the new run"
]
},
{
@@ -642,9 +551,9 @@
}
],
"kernelspec": {
"display_name": "Python 3.6",
"display_name": "Python 3",
"language": "python",
"name": "python36"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -656,7 +565,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
"version": "3.6.7"
}
},
"nbformat": 4,