{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
"\n",
"Licensed under the MIT License."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Reinforcement Learning in Azure Machine Learning - Training multiple agents on collaborative ParticleEnv tasks\n",
"\n",
"This tutorial will show you how to train policies in a multi-agent scenario.\n",
"We use OpenAI Gym's [Particle environments](https://github.com/openai/multiagent-particle-envs),\n",
"which model agents and landmarks in a two-dimensional world. Particle comes with\n",
"several predefined scenarios, both competitive and collaborative, and with or without communication.\n",
"\n",
"For this tutorial, we pick a cooperative navigation scenario where N agents are in a world with N\n",
"landmarks. The agents' goal is to cover all the landmarks without collisions,\n",
"so agents must learn to avoid each other (social distancing!). The video below shows training\n",
"results for N=3 agents/landmarks:\n",
"\n",
"<table style=\"width:50%\">\n",
"  <tr>\n",
"    <th style=\"text-align: center;\">\n",
"      <img src=\"./images/particle_simple_spread.gif\" alt=\"Particle video\" align=\"middle\" margin-left=\"auto\" margin-right=\"auto\"/>\n",
"    </th>\n",
"  </tr>\n",
"  <tr style=\"text-align: center;\">\n",
"    <th>Fig 1. Video of 3 agents covering 3 landmarks in a multiagent Particle scenario.</th>\n",
"  </tr>\n",
"</table>\n",
"\n",
"The tutorial will cover the following steps:\n",
"- Initializing Azure Machine Learning resources for training\n",
"- Training policies in a multi-agent environment with Azure Machine Learning service\n",
"- Monitoring training progress\n",
"\n",
"## Prerequisites\n",
"\n",
"You should have completed the Azure Machine Learning introductory tutorial and have a valid subscription ID, a resource group, and a workspace. For detailed instructions see [Tutorial: Get started creating your first ML experiment](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup).\n",
"\n",
"Please ensure that you have a current version of IPython (>= 7.15) installed.\n",
"\n",
"While this is a standalone notebook, we highly recommend going over the introductory notebooks for RL first.\n",
"- Getting started:\n",
"  - [RL using a compute instance with Azure Machine Learning](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/cartpole-on-compute-instance/cartpole_ci.ipynb)\n",
"  - [RL using Azure Machine Learning compute](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/cartpole-on-single-compute/cartpole_sc.ipynb)\n",
"- [Scaling RL training runs with Azure Machine Learning](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb)\n",
"\n",
"## Initialize resources\n",
"\n",
"All required Azure Machine Learning service resources for this tutorial can be set up from Jupyter. This includes:\n",
"\n",
"- Connecting to your existing Azure Machine Learning workspace.\n",
"- Creating an experiment to track runs.\n",
"- Creating remote compute targets for [Ray](https://docs.ray.io/en/latest/index.html).\n",
"\n",
"\n",
"### Azure Machine Learning SDK\n",
"\n",
"Display the Azure Machine Learning SDK version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1646249589452
}
},
"outputs": [],
"source": [
"import azureml.core\n",
"print('Azure Machine Learning SDK version:', azureml.core.VERSION)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to workspace\n",
"\n",
"Get a reference to an existing Azure Machine Learning workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1646250284486
}
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"print(ws.name, ws.location, ws.resource_group, sep=' | ')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create an experiment\n",
"\n",
"Create an experiment to track the runs in your workspace. A\n",
"workspace can have multiple experiments and each experiment\n",
"can be used to track multiple runs (see [documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py)\n",
"for details)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1646250342411
}
},
"outputs": [],
"source": [
"from azureml.core import Experiment\n",
"\n",
"exp = Experiment(workspace=ws, name='particle-multiagent')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create or attach an existing compute resource\n",
"\n",
"A compute target is a designated compute resource where you run your training script. For more information, see [What are compute targets in Azure Machine Learning service?](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target).\n",
"\n",
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
"\n",
"#### CPU target for Ray head\n",
"\n",
"In the experiment setup for this tutorial, the Ray head node will\n",
"run on a CPU node (D3 type). A maximum cluster size of 1 node is\n",
"therefore sufficient. If you wish to run multiple experiments in\n",
"parallel using the same CPU cluster, you may elect to increase this\n",
"number. The cluster will automatically scale down to 0 nodes when\n",
"no training jobs are scheduled (see `min_nodes`).\n",
"\n",
"The code below creates a compute cluster of D3 type nodes.\n",
"If a cluster with the specified name is already in your workspace,\n",
"the code will skip the creation process.\n",
"\n",
"**Note: Creation of a compute resource can take several minutes**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1646250346756
}
},
"outputs": [],
"source": [
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
"\n",
"cpu_cluster_name = 'cpu-cl-d3'\n",
"\n",
"if cpu_cluster_name in ws.compute_targets:\n",
"    cpu_cluster = ws.compute_targets[cpu_cluster_name]\n",
"    if cpu_cluster and isinstance(cpu_cluster, AmlCompute):\n",
"        if cpu_cluster.provisioning_state == 'Succeeded':\n",
"            print('Found existing compute target for {}. Using it.'.format(cpu_cluster_name))\n",
"        else:\n",
"            raise Exception('Found existing compute target for {} '.format(cpu_cluster_name)\n",
"                            + 'but it is in state {}'.format(cpu_cluster.provisioning_state))\n",
"else:\n",
"    print('Creating a new compute target for {}...'.format(cpu_cluster_name))\n",
"    provisioning_config = AmlCompute.provisioning_configuration(\n",
"        vm_size='STANDARD_D3',\n",
"        min_nodes=0,\n",
"        max_nodes=1)\n",
"\n",
"    cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, provisioning_config)\n",
"    cpu_cluster.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"\n",
"    print('Cluster created.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the policies\n",
"\n",
"### Training environment\n",
"\n",
"This tutorial uses a custom Docker image with the necessary software installed.\n",
"The [Environment](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-environments)\n",
"class stores the configuration for the training environment. Here, the environment\n",
"is built and registered from a Dockerfile via `Environment.from_dockerfile`, so the\n",
"packages preinstalled in the image are used as-is.\n",
"\n",
"Note that since capturing videos of the training runs requires a display, the training\n",
"command is prefixed with **xvfb-run** so that the Python process runs under a virtual display."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1646257481631
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"import os\n",
"\n",
"ray_environment_name = 'particle-cpu'\n",
"ray_environment_dockerfile_path = os.path.join(os.getcwd(), 'docker', 'cpu', 'Dockerfile')\n",
"ray_environment = Environment.from_dockerfile(\n",
"    name=ray_environment_name,\n",
"    dockerfile=ray_environment_dockerfile_path).register(workspace=ws)\n",
"ray_cpu_build_details = ray_environment.build(workspace=ws)\n",
"\n",
"ray_cpu_build_details.wait_for_completion(show_output=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training script\n",
"\n",
"This tutorial uses the multiagent algorithm [Multi-Agent Deep Deterministic Policy Gradient (MADDPG)](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=maddpg#multi-agent-deep-deterministic-policy-gradient-contrib-maddpg).\n",
"For training policies in a multiagent scenario, Ray's RLlib also\n",
"requires the `multiagent` configuration section to be specified. You\n",
"can find more information in the [common parameters](https://docs.ray.io/en/latest/rllib-training.html?highlight=multiagent#common-parameters)\n",
"documentation.\n",
"\n",
"The stopping criteria are set such that the training run is\n",
"terminated after either a mean reward of -450 is observed, or\n",
"training has run for over 2 hours.\n",
"\n",
"### Submitting a training run\n",
"\n",
"You can submit the training run using a `ScriptRunConfig` by providing the\n",
"command that runs the training and a `RunConfiguration` object configured\n",
"with your compute target, node count, and environment.\n",
"\n",
"Note that you can use the same notebook and scripts to experiment with\n",
"different Particle environments. You can find a list of supported\n",
"environments [here](https://github.com/openai/multiagent-particle-envs/tree/master#list-of-environments).\n",
"Simply change the `--scenario` parameter to a supported scenario.\n",
"\n",
"To get the best training results, you can also adjust the\n",
"`--final-reward` parameter, which determines when training stops. A higher\n",
"target reward means a longer run but better results. By default,\n",
"the final reward is -450, which should show good progress after\n",
"about one hour of run time.\n",
"\n",
"For this notebook, we use a single D3 node, giving us a total of 4 CPUs and\n",
"0 GPUs. One CPU is used by the MADDPG trainer, and an additional CPU is\n",
"consumed by the RLlib rollout worker. The other 2 CPUs are not used, though\n",
"smaller node types will run out of memory for this task.\n",
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1646275371701
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"from azureml.core import Experiment, RunConfiguration, ScriptRunConfig\n",
"from azureml.widgets import RunDetails\n",
"\n",
"experiment_name = 'particle-multiagent'\n",
"\n",
"experiment = Experiment(workspace=ws, name=experiment_name)\n",
"\n",
"aml_run_config_ml = RunConfiguration(communicator='OpenMpi')\n",
"aml_run_config_ml.target = cpu_cluster\n",
"aml_run_config_ml.node_count = 1\n",
"aml_run_config_ml.environment = ray_environment\n",
"\n",
"config = ScriptRunConfig(source_directory='./files',\n",
"                         command=[\n",
"                             'xvfb-run -s \"-screen 0 640x480x16 -ac +extension GLX +render\" python',\n",
"                             'particle_train.py',\n",
"                             '--scenario', 'simple_spread',\n",
"                             '--final-reward', '-450'\n",
"                         ],\n",
"                         run_config=aml_run_config_ml)\n",
"\n",
"train_run = experiment.submit(config)\n",
"\n",
"RunDetails(train_run).show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Job cancellation\n",
"\n",
"You may cancel the job by uncommenting and running the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you wish to cancel the run before it completes, uncomment and execute:\n",
"# train_run.cancel()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Monitoring training progress\n",
"\n",
"### View the Tensorboard\n",
"\n",
"The Tensorboard can be displayed via the Azure Machine Learning\n",
"service's [Tensorboard API](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-tensorboard).\n",
"When running locally, please make sure to follow the instructions\n",
"in the link and install required packages. Running this cell will output a URL for the Tensorboard.\n",
"\n",
"Note that the training script sets the log directory via the `local_dir`\n",
"parameter when starting RLlib. The `./logs` directory automatically appears\n",
"among the downloadable files for a run. Since the training script executes\n",
"on the Ray head node's run, we need a reference to that run, as shown below.\n",
"\n",
"The Tensorboard API will continuously stream logs from the run.\n",
"\n",
"**Note: It may take a couple of minutes after the run enters the \"Running\"\n",
"state before Tensorboard files become available; the board will then refresh\n",
"automatically.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from azureml.tensorboard import Tensorboard\n",
"\n",
"# tb = Tensorboard([train_run])\n",
"# tb.start()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View training videos\n",
"\n",
"As mentioned above, we record videos of the agents interacting with the\n",
"Particle world. These videos are often a crucial indicator for training\n",
"success. The code below downloads the latest video as it becomes available\n",
"and displays it in-line.\n",
"\n",
"Over time, the agents learn to cooperate and avoid collisions while\n",
"traveling to all landmarks.\n",
"\n",
"**Note: It can take several minutes for a video to appear after the run\n",
"has started.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from azureml.core import Dataset\n",
"from azureml.data.dataset_error_handling import DatasetValidationError\n",
"\n",
"from IPython.display import clear_output, display, Video\n",
"\n",
"datastore = ws.datastores['workspaceartifactstore']\n",
"path_prefix = './tmp_videos'\n",
"\n",
"def download_latest_training_video(run, video_checkpoint_counter):\n",
"    run_artifacts_path = os.path.join('ExperimentRun', f'dcid.{run.id}', 'logs', 'videos')\n",
"\n",
"    try:\n",
"        run_artifacts_ds = Dataset.File.from_files(datastore.path(os.path.join(run_artifacts_path, '**')))\n",
"    except DatasetValidationError:\n",
"        # This happens at the start of the run when there is no data available\n",
"        # in the run's artifacts\n",
"        return None, video_checkpoint_counter\n",
"\n",
"    video_files = [file for file in run_artifacts_ds.to_path() if file.endswith('.mp4')]\n",
"    if len(video_files) == video_checkpoint_counter:\n",
"        return None, video_checkpoint_counter\n",
"\n",
"    iteration_numbers = [int(vf[vf.rindex('video') + len('video') : vf.index('.mp4')]) for vf in video_files]\n",
"    latest_video = next(vf for vf in video_files if vf.endswith('{num}.mp4'.format(num=max(iteration_numbers))))\n",
"    latest_video = os.path.join(run_artifacts_path, os.path.normpath(latest_video[1:]))\n",
"\n",
"    datastore.download(\n",
"        target_path=path_prefix,\n",
"        prefix=latest_video.replace('\\\\', '/'),\n",
"        show_progress=False)\n",
"\n",
"    return os.path.join(path_prefix, latest_video), len(video_files)\n",
"\n",
"\n",
"def render_video(vf):\n",
"    clear_output(wait=True)\n",
"    display(Video(data=vf, embed=True, html_attributes='loop autoplay controls width=50%'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import shutil, time\n",
"\n",
"terminal_statuses = ['Canceled', 'Completed', 'Failed']\n",
"video_checkpoint_counter = 0\n",
"\n",
"while train_run.get_status() not in terminal_statuses:\n",
"    video_file, video_checkpoint_counter = download_latest_training_video(train_run, video_checkpoint_counter)\n",
"    if video_file is not None:\n",
"        render_video(video_file)\n",
"\n",
"        print('Displaying video number {}'.format(video_checkpoint_counter))\n",
"        shutil.rmtree(path_prefix)\n",
"\n",
"    # Interrupting the kernel can take up to 15 seconds\n",
"    # depending on when time.sleep started\n",
"    time.sleep(15)\n",
"\n",
"train_run.wait_for_completion()\n",
"print('The training run has reached a terminal status.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Cleaning up\n",
"\n",
"Below are code snippets to clean up any resources created as part of this tutorial that you don't wish to retain."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# to stop the Tensorboard, uncomment and run\n",
"# tb.stop()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# to delete the cpu compute target, uncomment and run\n",
"# cpu_cluster.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"We would love to hear your feedback! Please let us know what you think of Reinforcement Learning in Azure Machine Learning and what features you are looking forward to."
]
}
],
"metadata": {
"authors": [
{
"name": "andress"
}
],
"categories": [
"how-to-use-azureml",
"reinforcement-learning"
],
"interpreter": {
"hash": "13382f70c1d0595120591d2e358c8d446daf961bf951d1fba9a32631e205d5ab"
},
"kernel_info": {
"name": "python38-azureml"
},
"kernelspec": {
"display_name": "Python 3.8 - AzureML",
"language": "python",
"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.9"
},
"notice": "Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.",
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 0
}