{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Microsoft Corporation. All rights reserved.\n",
    "\n",
    "Licensed under the MIT License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Reinforcement Learning in Azure Machine Learning - Pong problem\n",
    "Reinforcement Learning in Azure Machine Learning is a managed service for running distributed reinforcement learning training and simulation using the open-source Ray framework.\n",
    "This notebook demonstrates how to use Ray RLlib to solve a more complex problem with a more involved setup: training distributed across multiple compute nodes and using a GPU.\n",
    "For this example we will train a Pong-playing agent on a cluster of two NC6 nodes (6 CPUs and 1 GPU each).\n",
    "\n",
    "## Pong problem\n",
    "[Pong](https://en.wikipedia.org/wiki/Pong) is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<table style=\"width:50%\">\n",
    "  <tr>\n",
    "    <th style=\"text-align: center;\"><img src=\"./images/pong.gif\" alt=\"Pong image\" align=\"middle\" margin-left=\"auto\" margin-right=\"auto\"/></th>\n",
    "  </tr>\n",
    "  <tr style=\"text-align: center;\">\n",
    "    <th>Fig 1. Pong game animation (from <a href=\"https://towardsdatascience.com/intro-to-reinforcement-learning-pong-92a94aa0f84d\">towardsdatascience.com</a>).</th>\n",
    "  </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The goal here is to train an agent to win an episode of Pong against its opponent by a margin of at least 10 points. An episode in Pong runs until one of the players reaches a score of 21. \"Episode\" is the term used across the [OpenAI Gym](https://www.gymlibrary.dev/environments/atari/pong/) environments for one run of a strictly defined task.\n",
    "\n",
    "Training a Pong agent is a compute-intensive task, and this example demonstrates how to use Reinforcement Learning in Azure Machine Learning to train an agent faster in a distributed, parallel environment. Below, you'll learn more about using head and worker compute targets to train an agent."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisite\n",
    "\n",
    "We highly recommend working through [Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb) first to understand the basics of Reinforcement Learning in Azure Machine Learning and the Ray RLlib framework used in this notebook."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Azure Machine Learning SDK\n",
    "Display the Azure Machine Learning SDK version."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683263371795
    }
   },
   "outputs": [],
   "source": [
    "%matplotlib inline\n",
    "\n",
    "# Azure Machine Learning core imports\n",
    "import azureml.core\n",
    "\n",
    "# Check core SDK version number\n",
    "print(\"Azure Machine Learning SDK version: \", azureml.core.VERSION)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Get Azure Machine Learning workspace\n",
    "Get a reference to an existing Azure Machine Learning workspace."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683263375690
    }
   },
   "outputs": [],
   "source": [
    "from azureml.core import Workspace\n",
    "\n",
    "ws = Workspace.from_config()\n",
    "print(ws.name, ws.location, ws.resource_group, sep=' | ')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create Azure Machine Learning experiment\n",
    "Create an experiment to track the runs in your workspace."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683263378789
    }
   },
   "outputs": [],
   "source": [
    "from azureml.core.experiment import Experiment\n",
    "\n",
    "# Experiment name\n",
    "experiment_name = 'rllib-pong-multi-node'\n",
    "exp = Experiment(workspace=ws, name=experiment_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create compute target\n",
    "\n",
    "In this example, we show how to set up a compute target for the Ray nodes.\n",
    "\n",
    "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683263385677
    }
   },
   "outputs": [],
   "source": [
    "from azureml.core.compute import AmlCompute, ComputeTarget\n",
    "\n",
    "# Choose a name for the Ray cluster\n",
    "compute_name = 'compute-gpu'\n",
    "compute_min_nodes = 0\n",
    "compute_max_nodes = 2\n",
    "\n",
    "# This example uses a GPU VM size.\n",
    "vm_size = 'STANDARD_NC6'\n",
    "\n",
    "if compute_name in ws.compute_targets:\n",
    "    compute_target = ws.compute_targets[compute_name]\n",
    "    if compute_target and isinstance(compute_target, AmlCompute):\n",
    "        if compute_target.provisioning_state == 'Succeeded':\n",
    "            print('Found existing compute target:', compute_name)\n",
    "        else:\n",
    "            raise Exception(\n",
    "                'Found compute target, but it is in state', compute_target.provisioning_state)\n",
    "else:\n",
    "    print('Creating a new compute target...')\n",
    "    provisioning_config = AmlCompute.provisioning_configuration(\n",
    "        vm_size=vm_size,\n",
    "        min_nodes=compute_min_nodes,\n",
    "        max_nodes=compute_max_nodes,\n",
    "    )\n",
    "\n",
    "    # Create the cluster\n",
    "    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
    "\n",
    "    # Poll for a minimum number of nodes and for a specific timeout.\n",
    "    # If no minimum node count is provided, the scale settings of the cluster are used.\n",
    "    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
    "\n",
    "    # For a more detailed view of the current AmlCompute status, use get_status()\n",
    "    print(compute_target.get_status().serialize())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "nteract": {
     "transient": {
      "deleting": false
     }
    }
   },
   "source": [
    "### Create Azure ML Environment\n",
    "\n",
    "This step creates and registers an Azure ML Environment that includes all of the dependencies needed to run this example, including CUDA drivers, PyTorch, RLlib, and associated tools. This step can take a significant amount of time (about 30 minutes) on the first run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683263388781
    },
    "jupyter": {
     "outputs_hidden": false,
     "source_hidden": false
    },
    "nteract": {
     "transient": {
      "deleting": false
     }
    }
   },
   "outputs": [],
   "source": [
    "ray_environment_name = 'pong-gpu'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683056774047
    },
    "jupyter": {
     "outputs_hidden": false,
     "source_hidden": false
    },
    "nteract": {
     "transient": {
      "deleting": false
     }
    }
   },
   "outputs": [],
   "source": [
    "import os\n",
    "from azureml.core import Environment\n",
    "\n",
    "ray_environment_dockerfile_path = os.path.join(os.getcwd(), 'docker', 'Dockerfile-gpu')\n",
    "\n",
    "# Build the GPU image from the Dockerfile and register it in the workspace\n",
    "ray_gpu_env = Environment \\\n",
    "    .from_dockerfile(name=ray_environment_name, dockerfile=ray_environment_dockerfile_path) \\\n",
    "    .register(workspace=ws)\n",
    "ray_gpu_build_details = ray_gpu_env.build(workspace=ws)\n",
    "\n",
    "ray_gpu_build_details.wait_for_completion(show_output=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create reinforcement learning training run\n",
    "\n",
    "The code below submits the training run using a `ScriptRunConfig`, providing the command that runs the training and a `RunConfiguration` object configured with your compute target, number of nodes, and the environment image to use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683264835679
    }
   },
   "outputs": [],
   "source": [
    "from azureml.core import RunConfiguration, ScriptRunConfig\n",
    "\n",
    "experiment_name = 'rllib-pong-multi-node'\n",
    "\n",
    "experiment = Experiment(workspace=ws, name=experiment_name)\n",
    "ray_environment = Environment.get(workspace=ws, name=ray_environment_name)\n",
    "\n",
    "aml_run_config_ml = RunConfiguration(communicator='OpenMpi')\n",
    "aml_run_config_ml.target = compute_target\n",
    "aml_run_config_ml.node_count = 2\n",
    "aml_run_config_ml.environment = ray_environment\n",
    "\n",
    "script_name = 'pong_rllib.py'\n",
    "config_name = 'pong-impala-vectorized.yaml'\n",
    "\n",
    "command = [\n",
    "    'python', script_name,\n",
    "    '--config', config_name\n",
    "]\n",
    "\n",
    "config = ScriptRunConfig(source_directory='./files',\n",
    "                         command=command,\n",
    "                         run_config=aml_run_config_ml)\n",
    "training_run = experiment.submit(config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Training configuration\n",
    "All training parameters (including the reinforcement learning algorithm) are set through a single configuration file. For this example we'll be using the IMPALA algorithm to train an agent to play Atari Pong.\n",
    "We set `num_workers` to 11 because we have 11 CPUs available for worker nodes (6 CPUs on each of 2 machines, with 1 CPU consumed as a head node).\n",
    "We set `episode_reward_mean` (under `stop`) to 10 so that the run terminates once we achieve a mean reward of 10.\n",
    "\n",
    "Here is the configuration we are using for this example:\n",
    "\n",
    "```yaml\n",
    "pong:\n",
    "    env: ALE/Pong-v5\n",
    "    run: IMPALA\n",
    "    config:\n",
    "        num_workers: 11\n",
    "        num_gpus: 1\n",
    "        rollout_fragment_length: 50\n",
    "        train_batch_size: 1000\n",
    "        num_sgd_iter: 2\n",
    "        num_multi_gpu_tower_stacks: 2\n",
    "        env_config:\n",
    "            frameskip: 1\n",
    "            full_action_space: false\n",
    "            repeat_action_probability: 0.0\n",
    "    stop:\n",
    "        episode_reward_mean: 10\n",
    "        total_time_s: 3600\n",
    "    model:\n",
    "        dim: 42\n",
    "```"
   ]
  },
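  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition, here is a minimal sketch of how a training script such as `pong_rllib.py` might consume a configuration file like this with Ray Tune. This is an illustration under assumptions, not the actual contents of `files/pong_rllib.py`: it assumes the script joins the Ray cluster that Azure ML starts across the nodes and hands the parsed experiment dictionary to `tune.run_experiments`.\n",
    "\n",
    "```python\n",
    "# Hypothetical sketch; the real logic lives in files/pong_rllib.py.\n",
    "import yaml\n",
    "\n",
    "import ray\n",
    "from ray import tune\n",
    "\n",
    "# Parse the experiment definition (the YAML shown above).\n",
    "with open('pong-impala-vectorized.yaml') as f:\n",
    "    experiments = yaml.safe_load(f)\n",
    "\n",
    "# RLlib-style YAMLs allow keys such as 'env' and 'model' at the top level;\n",
    "# Tune expects them under 'config', so fold them in before running.\n",
    "for spec in experiments.values():\n",
    "    cfg = spec.setdefault('config', {})\n",
    "    for key in ('env', 'model'):\n",
    "        if key in spec:\n",
    "            cfg[key] = spec.pop(key)\n",
    "\n",
    "# Join the existing Ray cluster instead of starting a local one.\n",
    "ray.init(address='auto')\n",
    "\n",
    "# Keys such as 'run', 'config', and 'stop' map onto Tune's experiment format.\n",
    "tune.run_experiments(experiments)\n",
    "```"
   ]
  },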
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Monitor the run\n",
    "\n",
    "Azure Machine Learning provides a Jupyter widget to show the status of an experiment run. You can use this widget to monitor the status of the runs. The widget shows a list of child runs; you can click on the link under **Status** to see the details of a child run. The widget also shows the metrics being logged."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683056781459
    }
   },
   "outputs": [],
   "source": [
    "from azureml.widgets import RunDetails\n",
    "\n",
    "RunDetails(training_run).show()"
   ]
  },
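  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If your notebook session disconnects while the run is in progress, you can reattach to the run by its id and resume monitoring. The snippet below is a minimal sketch, assuming the `experiment` and `training_run` objects from the cells above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core import Run\n",
    "\n",
    "# Reattach to a previously submitted run by its id and check its status.\n",
    "reattached_run = Run(experiment, run_id=training_run.id)\n",
    "print(reattached_run.get_status())"
   ]
  },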
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Stop the run\n",
    "\n",
    "To stop the run, call `training_run.cancel()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683056781759
    }
   },
   "outputs": [],
   "source": [
    "# Uncomment the line below to cancel the run\n",
    "# training_run.cancel()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Wait for completion\n",
    "Wait for the run to complete before proceeding. If you want to stop the run instead, you may skip this and move to the next section.\n",
    "\n",
    "**Note: The run may take anywhere from 30 to 45 minutes to complete.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1682525323059
    }
   },
   "outputs": [],
   "source": [
    "training_run.wait_for_completion()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Performance of the agent during training\n",
    "\n",
    "Let's get the reward metrics for the training run and observe how the agent's rewards improved over the training iterations as it learned to win the Pong game.\n",
    "\n",
    "Collect the episode reward metrics from the training run's metrics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1683064583273
    }
   },
   "outputs": [],
   "source": [
    "# Get the reward metrics from training_run\n",
    "episode_reward_mean = training_run.get_metrics(name='episode_reward_mean')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Plot the reward metrics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1682445012908
    }
   },
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "\n",
    "plt.plot(episode_reward_mean['episode_reward_mean'])\n",
    "plt.xlabel('training_iteration')\n",
    "plt.ylabel('episode_reward_mean')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We observe that during training over multiple episodes, the agent learns to win the Pong game against the opponent by our target margin of 10 points in a 21-point episode.\n",
    "\n",
    "**Congratulations! You have trained your Pong agent to win a game.**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Cleaning up\n",
    "For your convenience, below you can find code snippets to clean up any resources created as part of this tutorial that you don't wish to retain."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "gather": {
     "logged": 1682445012927
    }
   },
   "outputs": [],
   "source": [
    "# To archive the created experiment:\n",
    "# experiment.archive()\n",
    "\n",
    "# To delete the compute target created in this notebook:\n",
    "# compute_target.delete()"
   ]
  }
 ],
 "metadata": {
  "authors": [
   {
    "name": "vineetg"
   }
  ],
  "categories": [
   "how-to-use-azureml",
   "reinforcement-learning"
  ],
  "kernel_info": {
   "name": "python38-azureml"
  },
  "kernelspec": {
   "display_name": "Python 3.8 - AzureML",
   "language": "python",
   "name": "python38-azureml"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  },
  "microsoft": {
   "host": {
    "AzureML": {
     "notebookHasBeenCompleted": true
    }
   },
   "ms_spell_check": {
    "ms_spell_check_language": "en"
   }
  },
  "notice": "Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.",
  "nteract": {
   "version": "nteract-front-end@1.0.0"
  },
  "vscode": {
   "interpreter": {
    "hash": "00c28698cbad9eaca051e9759b1181630e646922505b47b4c6352eb5aa72ddfc"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}