{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Tutorial: Learn how to use Datasets in Azure ML"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this tutorial, you learn how to use Azure ML Datasets to train a classification model with the Azure Machine Learning SDK for Python. You will:\n",
    "\n",
    "* Explore and prepare data for training the model\n",
    "* Register the Dataset in your workspace to share it with others\n",
    "* Take snapshots of data to ensure models can be trained with the same data every time\n",
    "* Create and use multiple Dataset definitions to ensure that updates to the definition don't break existing pipelines/scripts\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this tutorial, you:\n",
    "\n",
    "☑ Set up a Python environment and import packages\n",
    "\n",
    "☑ Load the Titanic data from your Azure Blob Storage (the [original data](https://www.kaggle.com/c/titanic/data) can be found on Kaggle)\n",
    "\n",
    "☑ Explore and cleanse the data to remove anomalies\n",
    "\n",
    "☑ Register the Dataset in your workspace, allowing you to use it in model training\n",
    "\n",
    "☑ Take a Dataset snapshot for repeatability and train a model with the snapshot\n",
    "\n",
    "☑ Make changes to the dataset's definition without breaking the production model or the daily data pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "Skip ahead to the development environment setup to read through the notebook steps, or use the instructions below to get the notebook and run it on Azure Notebooks or your own notebook server. To run the notebook, you will need:\n",
    "\n",
    "A Python 3.6 notebook server with the following installed:\n",
    "* The Azure Machine Learning SDK for Python\n",
    "* The Azure Machine Learning Data Prep SDK for Python\n",
    "* The tutorial notebook\n",
    "\n",
    "The data and the train.py script to store in your Azure Blob Storage account:\n",
    " * [Titanic data](./train-dataset/Titanic.csv)\n",
    " * [train.py](./train-dataset/train.py)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To create and register Datasets you need:\n",
    "\n",
    " * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning service](https://aka.ms/AMLFree) today.\n",
    "\n",
    " * An Azure Machine Learning service workspace. See [Create an Azure Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/setup-create-workspace?branch=release-build-amls).\n",
    "\n",
    " * The Azure Machine Learning SDK for Python (version 1.0.21 or later). To install or update to the latest version of the SDK, see [Install or update the SDK](https://docs.microsoft.com/python/api/overview/azure/ml/install?view=azure-ml-py).\n",
    "\n",
    "First, set up your Python environment: import your Python packages, including `azureml.dataprep` and `azureml.core.dataset`. Then access your workspace through your Azure subscription and set up your compute target."
   ]
  },
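  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the SDKs are not installed yet, the next cell is a minimal sketch of how you might install them and check the SDK version; the `!pip` commands and package names (`azureml-sdk`, `azureml-dataprep`) are assumptions about your environment, not steps from the original tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Assumed install commands; uncomment them if the SDKs are missing from your environment\n",
    "# !pip install --upgrade azureml-sdk\n",
    "# !pip install --upgrade azureml-dataprep\n",
    "\n",
    "import azureml.core\n",
    "# Verify that the installed SDK meets the version requirement above (1.0.21 or later)\n",
    "print(azureml.core.VERSION)"
   ]
  },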
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import azureml.dataprep as dprep\n",
    "import azureml.core\n",
    "import pandas as pd\n",
    "import logging\n",
    "import os\n",
    "import shutil\n",
    "from azureml.core import Workspace, Datastore, Dataset\n",
    "\n",
    "# Get existing workspace from config.json file in the same folder as the tutorial notebook\n",
    "# You can download the config file from your workspace\n",
    "workspace = Workspace.from_config()\n",
    "print(\"Workspace\")\n",
    "print(workspace)\n",
    "print(\"Compute targets\")\n",
    "print(workspace.compute_targets)\n",
    "\n",
    "# Get compute target that has already been attached to the workspace\n",
    "# Pick the right compute target from the list of computes attached to your workspace\n",
    "compute_target_name = 'dataset-test'\n",
    "remote_compute_target = workspace.compute_targets[compute_target_name]"
   ]
  },
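  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The cell above assumes a compute target named `dataset-test` is already attached to your workspace; the lookup fails if no such compute exists. In that case, the following is a minimal sketch of how you could provision one with `AmlCompute` and then re-run the lookup. The VM size and node count are illustrative assumptions, not values from the original tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core.compute import AmlCompute, ComputeTarget\n",
    "\n",
    "# Sketch: provision a small AmlCompute cluster if the named target does not exist yet\n",
    "if compute_target_name not in workspace.compute_targets:\n",
    "    config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',  # illustrative VM size\n",
    "                                                   max_nodes=1)              # illustrative node count\n",
    "    remote_compute_target = ComputeTarget.create(workspace, compute_target_name, config)\n",
    "    remote_compute_target.wait_for_completion(show_output=True)"
   ]
  },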
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To load data into your dataset, you access the data through your datastore. After you create your dataset, you can use `get_profile()` to see summary statistics of your data.\n",
    "\n",
    "We will now upload the [original data](https://www.kaggle.com/c/titanic/data) to the default datastore (blob) within your workspace."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "datastore = workspace.get_default_datastore()\n",
    "datastore.upload_files(files=['./train-dataset/Titanic.csv'],\n",
    "                       target_path='train-dataset/',\n",
    "                       overwrite=True,\n",
    "                       show_progress=True)\n",
    "\n",
    "dataset = Dataset.auto_read_files(path=datastore.path('train-dataset/Titanic.csv'))\n",
    "\n",
    "# Display Dataset Profile of the Titanic Dataset\n",
    "dataset.get_profile()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To predict whether a person survived the Titanic's sinking, the columns that are relevant for training the model are 'Survived', 'Pclass', 'Sex', 'SibSp', 'Parch', and 'Fare'. You can update your dataset's definition to keep only these columns. You will also need to convert the values (\"male\", \"female\") in the \"Sex\" column to 0 or 1, because the algorithm in the train.py file uses numeric values instead of strings.\n",
    "\n",
    "For more examples of preparing data with Datasets, see [Explore and prepare data with the Dataset class](https://aka.ms/azureml/howto/exploreandpreparedata)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ds_def = dataset.get_definition()\n",
    "# Keep only the columns that are relevant for training\n",
    "ds_def = ds_def.keep_columns(['Survived', 'Pclass', 'Sex', 'SibSp', 'Parch', 'Fare'])\n",
    "# Encode the 'Sex' column as numeric values for the training algorithm\n",
    "ds_def = ds_def.replace('Sex', 'male', 0)\n",
    "ds_def = ds_def.replace('Sex', 'female', 1)\n",
    "ds_def.head(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once you have cleaned your data, you can register your dataset in your workspace.\n",
    "\n",
    "Registering your dataset gives you easy access to your processed data and lets you share it with other people in your organization who use the same workspace. The dataset can be accessed from any notebook or script that is connected to your workspace."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = dataset.update_definition(ds_def, 'Cleaned Data')\n",
    "\n",
    "dataset.generate_profile(compute_target='local').get_result()\n",
    "\n",
    "dataset_name = 'clean_Titanic_tutorial'\n",
    "dataset = dataset.register(workspace=workspace,\n",
    "                           name=dataset_name,\n",
    "                           description='training dataset',\n",
    "                           tags={'year': '2019', 'month': 'Apr'},\n",
    "                           exist_ok=True)\n",
    "workspace.datasets"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also take a snapshot of your dataset. A snapshot makes it easy to reproduce your data exactly as it was at that moment. Even if you change the definition of your dataset, or have data that refreshes regularly, you can always go back to the snapshot to compare. Because the snapshot is created on a compute target in your workspace, provisioning the compute may take a significant amount of time before the action itself runs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(dataset.get_all_snapshots())\n",
    "snapshot_name = 'train_snapshot'\n",
    "\n",
    "print(\"Compute target status\")\n",
    "print(remote_compute_target.get_status().provisioning_state)\n",
    "\n",
    "snapshot = dataset.create_snapshot(snapshot_name=snapshot_name,\n",
    "                                   compute_target=remote_compute_target,\n",
    "                                   create_data_snapshot=True)\n",
    "snapshot.wait_for_completion()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that you have registered your dataset and created a snapshot, you can call up the dataset and its snapshot to use in your train.py script.\n",
    "\n",
    "The following code snippet will train your model locally using the train.py script."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.core import Experiment, RunConfiguration, ScriptRunConfig\n",
    "\n",
    "experiment_name = 'training-datasets'\n",
    "experiment = Experiment(workspace=workspace, name=experiment_name)\n",
    "project_folder = './train-dataset/'\n",
    "\n",
    "# Create a new RunConfiguration object that uses the locally managed Python environment\n",
    "run_config = RunConfiguration()\n",
    "run_config.environment.python.user_managed_dependencies = True\n",
    "\n",
    "src = ScriptRunConfig(source_directory=project_folder,\n",
    "                      script='train.py',\n",
    "                      run_config=run_config)\n",
    "run = experiment.submit(config=src)\n",
    "run.wait_for_completion(show_output=True)"
   ]
  },
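  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The train.py script itself is not shown in this notebook. Below is a minimal sketch of what such a script might contain, assuming it loads the registered dataset by name and fits a scikit-learn logistic regression; the dataset name, features, and model choice are illustrative assumptions, not the actual contents of [train.py](./train-dataset/train.py).\n",
    "\n",
    "```python\n",
    "# Hypothetical sketch of train.py; the real script may differ\n",
    "from azureml.core import Run, Dataset\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "run = Run.get_context()               # context of the submitted run\n",
    "workspace = run.experiment.workspace  # workspace the experiment belongs to\n",
    "\n",
    "# Assumption: load the dataset registered earlier in this tutorial\n",
    "dataset = Dataset.get(workspace=workspace, name='clean_Titanic_tutorial')\n",
    "df = dataset.to_pandas_dataframe().dropna()\n",
    "\n",
    "X = df.drop(columns=['Survived'])\n",
    "y = df['Survived']\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "model = LogisticRegression()\n",
    "model.fit(X_train, y_train)\n",
    "run.log('accuracy', model.score(X_test, y_test))  # record the test accuracy on the run\n",
    "```"
   ]
  },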
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also use the same script with your dataset snapshot in your pipeline's `PythonScriptStep`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from azureml.pipeline.core import Pipeline\n",
    "from azureml.pipeline.steps import PythonScriptStep\n",
    "\n",
    "trainStep = PythonScriptStep(script_name=\"train.py\",\n",
    "                             compute_target=remote_compute_target,\n",
    "                             source_directory=project_folder)\n",
    "\n",
    "pipeline = Pipeline(workspace=workspace,\n",
    "                    steps=[trainStep])\n",
    "\n",
    "pipeline_run = experiment.submit(pipeline)\n",
    "pipeline_run.wait_for_completion()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At any point in your workflow, you can get a previous snapshot of your dataset and use that version in your pipeline to quickly see how different versions of your data affect your model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "snapshot = dataset.get_snapshot(snapshot_name=snapshot_name)\n",
    "snapshot.to_pandas_dataframe().head(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can make changes to the dataset's definition without breaking the production model or the daily data pipeline, as the sketch below illustrates."
   ]
  },
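  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of this idea, a consuming script can pin itself to a specific definition version, so that later updates to the dataset's definition do not change its input; the version number below is illustrative."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pin to a specific definition version so later definition updates don't affect this consumer\n",
    "pinned_def = dataset.get_definition('1')  # version '1' is illustrative\n",
    "pinned_def.head(5)"
   ]
  },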
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can call `get_definitions()` to see that there are several versions; each change to the dataset's definition adds another one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset.get_definitions()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset = Dataset.get(workspace=workspace, name=dataset_name)\n",
    "ds_def = dataset.get_definition()\n",
    "ds_def = ds_def.drop_columns(['Fare'])\n",
    "dataset = dataset.update_definition(ds_def, 'Dropping Fare as Pclass and Fare are strongly correlated')\n",
    "dataset.generate_profile(compute_target='local').get_result()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dataset definitions can be deprecated when their use is no longer recommended and a replacement is available. When a deprecated dataset definition is used in an AML Experimentation/Pipeline scenario, a warning message is returned, but execution is not blocked."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Deprecate dataset definition 1 in favor of definition 2\n",
    "ds_def = dataset.get_definition('1')\n",
    "ds_def.deprecate(deprecate_by_dataset_id=dataset._id, deprecated_by_definition_version='2')\n",
    "dataset.get_definitions()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dataset definitions can be archived when they should no longer be used for any reason (for example, the underlying data is no longer available). When an archived dataset definition is used in an AML Experimentation/Pipeline scenario, execution is blocked with an error. No further actions can be performed on archived Dataset definitions, but the references are kept intact."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Archive the deprecated dataset definition 1\n",
    "ds_def = dataset.get_definition('1')\n",
    "ds_def.archive()\n",
    "dataset.get_definitions()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also reactivate any definition that you archived, for later use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reactivate the archived dataset definition 1\n",
    "ds_def = dataset.get_definition('1')\n",
    "ds_def.reactivate()\n",
    "dataset.get_definitions()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now delete the snapshot to free up space in your workspace."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset.delete_snapshot(snapshot_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You have now used a dataset from the start to the finish of your experiment!"
   ]
  }
 ],
 "metadata": {
  "authors": [
   {
    "name": "cforbe"
   }
  ],
  "kernelspec": {
   "display_name": "Python 3.6",
   "language": "python",
   "name": "python36"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}