{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Copyright (c) Microsoft Corporation. All rights reserved.\n", "\n", "Licensed under the MIT License." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/reinforcement-learning/cartpole-on-compute-instance/cartpole_ci.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Compute Instance\n", "\n", "Reinforcement Learning in Azure Machine Learning is a managed service for running reinforcement learning training and simulation. With Reinforcement Learning in Azure Machine Learning, data scientists can start developing reinforcement learning systems on one machine, and scale to compute targets with 100s of nodes if needed.\n", "\n", "This example shows how to use Reinforcement Learning in Azure Machine Learning to train a Cartpole playing agent on a compute instance." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Cartpole problem\n", "\n", "Cartpole, also known as [Inverted Pendulum](https://en.wikipedia.org/wiki/Inverted_pendulum), is a pendulum with a center of mass above its pivot point. This formation is essentially unstable and will easily fall over but can be kept balanced by applying appropriate horizontal forces to the pivot point.\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", "
\n", " \"Cartpole \n", "

Fig 1. Cartpole problem schematic description (from towardsdatascience.com).

\n", "\n", "The goal here is to train an agent to keep the cartpole balanced by applying appropriate forces to the pivot point.\n", "\n", "See [this video](https://www.youtube.com/watch?v=XiigTGKZfks) for a real-world demonstration of cartpole problem." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Prerequisite\n", "The user should have completed the Azure Machine Learning Tutorial: [Get started creating your first ML experiment with the Python SDK](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup). You will need to make sure that you have a valid subscription ID, a resource group, and an Azure Machine Learning workspace. All datastores and datasets you use should be associated with your workspace." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up Development Environment\n", "The following subsections show typical steps to setup your development environment. Setup includes:\n", "\n", "* Connecting to a workspace to enable communication between your local machine and remote resources\n", "* Creating an experiment to track all your runs\n", "* Using a Compute Instance as compute target" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Azure Machine Learning SDK \n", "Display the Azure Machine Learning SDK version." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683062935076 } }, "outputs": [], "source": [ "import azureml.core\n", "print(\"Azure Machine Learning SDK version:\", azureml.core.VERSION)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Get Azure Machine Learning workspace\n", "Get a reference to an existing Azure Machine Learning workspace." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683062936280 } }, "outputs": [], "source": [ "from azureml.core import Workspace\n", "\n", "ws = Workspace.from_config()\n", "print(ws.name, ws.location, ws.resource_group, sep = ' | ')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Use Compute Instance as compute target\n", "\n", "A compute target is a designated compute resource where you run your training and simulation scripts. This location may be your local machine or a cloud-based compute resource. For more information see [What are compute targets in Azure Machine Learning?](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target)\n", "\n", "The code below shows how to use current compute instance as a compute target. First some helper functions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683062936485 } }, "outputs": [], "source": [ "import os.path\n", "\n", "# Get information about the currently running compute instance (notebook VM), like its name and prefix.\n", "def load_nbvm():\n", " if not os.path.isfile(\"/mnt/azmnt/.nbvm\"):\n", " return None\n", " with open(\"/mnt/azmnt/.nbvm\", 'r') as nbvm_file:\n", " return { key:value for (key, value) in [ line.strip().split('=') for line in nbvm_file if '=' in line ] }\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we use these helper functions to get a handle to current compute instance." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683062937126 } }, "outputs": [], "source": [ "from azureml.core.compute import ComputeInstance\n", "from azureml.core.compute_target import ComputeTargetException\n", "\n", "import random\n", "import string\n", "\n", "# Load current compute instance info\n", "current_compute_instance = load_nbvm()\n", "\n", "# For this demo, let's use the current compute instance as the compute target, if available\n", "if current_compute_instance:\n", " print(\"Current compute instance:\", current_compute_instance)\n", " instance_name = current_compute_instance['instance']\n", "else:\n", " # Compute instance name needs to be unique across all existing compute instances within an Azure region\n", " instance_name = \"cartpole-ci-\" + \"\".join(random.choice(string.ascii_lowercase) for _ in range(5))\n", " try:\n", " instance = ComputeInstance(workspace=ws, name=instance_name)\n", " print('Found existing instance, use it.')\n", " except ComputeTargetException:\n", " print(\"Creating new compute instance...\")\n", " compute_config = ComputeInstance.provisioning_configuration(\n", " vm_size='STANDARD_D2_V2'\n", " )\n", " instance = ComputeInstance.create(ws, instance_name, compute_config)\n", " instance.wait_for_completion(show_output=True)\n", " print(\"Instance name:\", instance_name)\n", "\n", "compute_target = ws.compute_targets[instance_name]\n", "\n", "print(\"Compute target status:\")\n", "print(compute_target.get_status().serialize())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create Azure Machine Learning experiment\n", "Create an experiment to track the runs in your workspace. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683062937499 } }, "outputs": [], "source": [ "from azureml.core.experiment import Experiment\n", "\n", "experiment_name = 'CartPole-v1-CI'\n", "experiment = Experiment(workspace=ws, name=experiment_name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064044718 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from azureml.core import Environment\n", "import os\n", "import time\n", "\n", "ray_environment_name = 'cartpole-ray-ci'\n", "ray_environment_dockerfile_path = os.path.join(os.getcwd(), 'files', 'docker', 'Dockerfile')\n", "\n", "# Build environment image\n", "ray_environment = Environment. \\\n", " from_dockerfile(name=ray_environment_name, dockerfile=ray_environment_dockerfile_path). \\\n", " register(workspace=ws)\n", "ray_env_build_details = ray_environment.build(workspace=ws)\n", "\n", "ray_env_build_details.wait_for_completion(show_output=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Train Cartpole Agent\n", "In this section, we show how to use Azure Machine Learning jobs and Ray/RLlib framework to train a cartpole playing agent. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create reinforcement learning training run\n", "\n", "The code below submits the training run using a `ScriptRunConfig`. By providing the\n", "command to run the training, and a `RunConfig` object configured with your\n", "compute target, number of nodes, and environment image to use." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064046594 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from azureml.core import Environment\n", "from azureml.core import RunConfiguration, ScriptRunConfig, Experiment\n", "from azureml.core.runconfig import DockerConfiguration, RunConfiguration\n", "\n", "config_name = 'cartpole-ppo.yaml'\n", "script_name = 'cartpole_training.py'\n", "script_arguments = [\n", " '--config', config_name\n", "]\n", "\n", "aml_run_config_ml = RunConfiguration(communicator='OpenMpi')\n", "aml_run_config_ml.target = compute_target\n", "aml_run_config_ml.node_count = 1\n", "aml_run_config_ml.environment = ray_environment\n", "\n", "training_config = ScriptRunConfig(source_directory='./files',\n", " script=script_name,\n", " arguments=script_arguments,\n", " run_config = aml_run_config_ml\n", " )\n", "training_run = experiment.submit(training_config)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training configuration\n", "\n", "This is the training configuration (in yaml) that we use to train an agent to solve the CartPole problem using\n", "the PPO algorithm.\n", "\n", "```yaml\n", "cartpole-ppo:\n", " env: CartPole-v1\n", " run: PPO\n", " stop:\n", " episode_reward_mean: 475\n", " time_total_s: 300\n", " checkpoint_config:\n", " checkpoint_frequency: 2\n", " checkpoint_at_end: true\n", " config:\n", " # Works for both torch and tf.\n", " framework: torch\n", " gamma: 0.99\n", " lr: 0.0003\n", " num_workers: 1\n", " observation_filter: MeanStdFilter\n", " num_sgd_iter: 6\n", " vf_loss_coeff: 0.01\n", " model:\n", " fcnet_hiddens: [32]\n", " fcnet_activation: linear\n", " vf_share_layers: true\n", " enable_connectors: true\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Monitor experiment\n", "Azure Machine Learning provides a Jupyter widget to show the status of an experiment run. You could use this widget to monitor the status of the runs.\n", "\n", "You can click on the link under **Status** to see the details of a child run. It will also show the metrics being logged." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064049813 } }, "outputs": [], "source": [ "from azureml.widgets import RunDetails\n", "\n", "RunDetails(training_run).show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Stop the run\n", "\n", "To stop the run, call `training_run.cancel()`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064050024 } }, "outputs": [], "source": [ "# Uncomment line below to cancel the run\n", "# training_run.cancel()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Wait for completion\n", "Wait for the run to complete before proceeding.\n", "\n", "**Note: The run may take a few minutes to complete.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064304728 } }, "outputs": [], "source": [ "training_run.wait_for_completion()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluate Trained Agent and See Results\n", "\n", "We can evaluate a previously trained policy using the `cartpole_rollout.py` helper script provided by RLlib (see [Evaluating Trained Policies](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies) for more details). 
Here we use an adaptation of this script to reconstruct a policy from a checkpoint taken and saved during training. We took these checkpoints by setting `checkpoint-freq` and `checkpoint-at-end` parameters above.\n", "\n", "In this section we show how to get access to these checkpoints data, and then how to use them to evaluate the trained policy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create a dataset of training artifacts\n", "To evaluate a trained policy (a checkpoint) we need to make the checkpoint accessible to the rollout script.\n", "We can use the Run API to download policy training artifacts (saved model and checkpoints) to local compute." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064305251 } }, "outputs": [], "source": [ "from os import path\n", "from distutils import dir_util\n", "\n", "training_artifacts_path = path.join(\"logs\", \"cartpole-ppo\")\n", "print(\"Training artifacts path:\", training_artifacts_path)\n", "\n", "if path.exists(training_artifacts_path):\n", " dir_util.remove_tree(training_artifacts_path)\n", "\n", "# Download run artifacts to local compute\n", "training_run.download_files(training_artifacts_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's find the checkpoints and the last checkpoint number." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064305283 } }, "outputs": [], "source": [ "# A helper function to find all of the checkpoint directories located within a larger directory tree\n", "def find_checkpoints(file_path):\n", " print(\"Looking in path:\", file_path)\n", " checkpoints = []\n", " for root, dirs, files in os.walk(file_path):\n", " trimmed_root = root[len(file_path)+1:]\n", " for name in dirs:\n", " if name.startswith('checkpoint_'):\n", " checkpoints.append(path.join(trimmed_root, name))\n", " return checkpoints" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064305305 } }, "outputs": [], "source": [ "# Find checkpoints and last checkpoint number\n", "checkpoint_files = find_checkpoints(training_artifacts_path)\n", "\n", "last_checkpoint_path = None\n", "last_checkpoint_number = -1\n", "for checkpoint_file in checkpoint_files:\n", " checkpoint_number = int(os.path.basename(checkpoint_file).split('_')[1])\n", " if checkpoint_number > last_checkpoint_number:\n", " last_checkpoint_path = checkpoint_file\n", " last_checkpoint_number = checkpoint_number\n", "\n", "print(\"Last checkpoint number:\", last_checkpoint_number)\n", "print(\"Last checkpoint path:\", last_checkpoint_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we upload checkpoints to default datastore and create a file dataset. This dataset will be used to pass in the checkpoints to the rollout script." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064305331 } }, "outputs": [], "source": [ "# Upload the checkpoint files and create a DataSet\n", "from azureml.data.dataset_factory import FileDatasetFactory\n", "\n", "datastore = ws.get_default_datastore()\n", "checkpoint_ds = FileDatasetFactory.upload_directory(training_artifacts_path, (datastore, 'cartpole_checkpoints_' + training_run.id), overwrite=False, show_progress=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To verify, we can print out the number (and paths) of all the files in the dataset." 
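, "\n", "\n", "Since `FileDataset.to_path()` returns the file paths relative to the dataset root, you can also sanity-check that the last checkpoint made it into the dataset. A small illustrative check (assuming `last_checkpoint_path` from above):\n", "\n", "```python\n", "# Illustrative sanity check: confirm that files from the last checkpoint\n", "# directory are present in the uploaded dataset.\n", "matches = [p for p in checkpoint_ds.to_path() if last_checkpoint_path in p]\n", "print(\"Files under\", last_checkpoint_path, \":\", len(matches))\n", "```"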
] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064305353 } }, "outputs": [], "source": [ "artifacts_paths = checkpoint_ds.to_path()\n", "print(\"Number of files in dataset:\", len(artifacts_paths))\n", "\n", "# Uncomment line below to print all file paths\n", "#print(\"Artifacts dataset file paths: \", artifacts_paths)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Evaluate Trained Agent and See Results\n", "\n", "We can evaluate a previously trained policy using the `cartpole_rollout.py` helper script provided by RLlib (see [Evaluating Trained Policies](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies) for more details). Here we use an adaptation of this script to reconstruct a policy from a checkpoint taken and saved during training. We took these checkpoints by setting `checkpoint-freq` and `checkpoint-at-end` parameters above.\n", "In this section we show how to use these checkpoints to evaluate the trained policy." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064305371 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "ray_environment_name = 'cartpole-ray-ci'\n", "\n", "experiment_name = 'CartPole-v1-CI'\n", "\n", "experiment = Experiment(workspace=ws, name=experiment_name)\n", "ray_environment = Environment.get(workspace=ws, name=ray_environment_name)\n", "\n", "script_name = 'cartpole_rollout.py'\n", "script_arguments = [\n", " '--steps', '2000',\n", " '--checkpoint', last_checkpoint_path,\n", " '--algo', 'PPO',\n", " '--render', 'false',\n", " '--dataset_path', checkpoint_ds.as_named_input('dataset_path').as_mount()\n", "]\n", "\n", "aml_run_config_ml = RunConfiguration(communicator='OpenMpi')\n", "aml_run_config_ml.target = compute_target\n", "aml_run_config_ml.node_count = 1\n", "aml_run_config_ml.environment = ray_environment\n", "aml_run_config_ml.data\n", "\n", "rollout_config = ScriptRunConfig(\n", " source_directory='./files',\n", " script=script_name,\n", " arguments=script_arguments,\n", " run_config = aml_run_config_ml\n", " )\n", " \n", "rollout_run = experiment.submit(rollout_config)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And then, similar to the training section, we can monitor the real-time progress of the rollout run and its chid as follows. If you browse logs of the child run you can see the evaluation results recorded in std_log_process_0.txt file. Note that you may need to wait several minutes before these results become available." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064305399 } }, "outputs": [], "source": [ "RunDetails(rollout_run).show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wait for completion of the rollout run, or you may cancel the run." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064305419 } }, "outputs": [], "source": [ "# Uncomment line below to cancel the run\n", "#rollout_run.cancel()\n", "rollout_run.wait_for_completion()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cleaning up\n", "For your convenience, below you can find code snippets to clean up any resources created as part of this tutorial that you don't wish to retain." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1683064305437 } }, "outputs": [], "source": [ "# To archive the created experiment:\n", "#exp.archive()\n", "\n", "# To delete created compute instance\n", "if not current_compute_instance:\n", " compute_target.delete()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Next\n", "This example was about running Reinforcement Learning in Azure Machine Learning (Ray/RLlib Framework) on a compute instance. Please see [Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb)\n", "example which uses Ray RLlib to train a Cartpole playing agent on a single node remote compute.\n" ] } ], "metadata": { "authors": [ { "name": "adrosa" }, { "name": "hoazari" } ], "categories": [ "how-to-use-azureml", "reinforcement-learning" ], "kernel_info": { "name": "python38-azureml" }, "kernelspec": { "display_name": "Python 3.8 - AzureML", "language": "python", "name": "python38-azureml" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" }, "microsoft": { "host": { "AzureML": { "notebookHasBeenCompleted": true } }, "ms_spell_check": { "ms_spell_check_language": "en" } }, "notice": "Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.", "nteract": { "version": "nteract-front-end@1.0.0" }, "vscode": { "interpreter": { "hash": "00c28698cbad9eaca051e9759b1181630e646922505b47b4c6352eb5aa72ddfc" } } }, "nbformat": 4, "nbformat_minor": 0 }