mirror of
https://github.com/Azure/MachineLearningNotebooks.git
update samples from Release-53 as a part of 1.19.0 SDK stable release
@@ -19,8 +19,8 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# How to Setup a Schedule for a Published Pipeline\n",
|
||||
"In this notebook, we will show you how you can run an already published pipeline on a schedule."
|
||||
"# How to Setup a Schedule for a Published Pipeline or Pipeline Endpoint\n",
|
||||
"In this notebook, we will show you how you can run an already published pipeline or a pipeline endpoint on a schedule."
|
||||
]
|
||||
},
|
||||
{
|
||||
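For orientation: the notebook assumes an authenticated workspace and an already published pipeline before any scheduling happens. A minimal sketch of that starting point (the config file location and the use of PublishedPipeline.list are assumptions, not shown in this diff):

    from azureml.core import Workspace
    from azureml.pipeline.core import PublishedPipeline

    # Load the workspace from a local config.json (assumed to exist).
    ws = Workspace.from_config()

    # List active published pipelines; the notebook's published_pipeline1.id
    # would come from an earlier publish step.
    for p in PublishedPipeline.list(ws, active_only=True):
        print(p.name, p.id)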
@@ -159,6 +159,43 @@
|
||||
"print(\"Newly published pipeline id: {}\".format(published_pipeline1.id))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"nteract": {
|
||||
"transient": {
|
||||
"deleting": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Create a Pipeline Endpoint\n",
|
||||
"Alternatively, you can create a schedule to run a pipeline endpoint instead of a published pipeline. You will need this to create a schedule against a pipeline endpoint in the last section of this notebook. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"jupyter": {
|
||||
"outputs_hidden": false,
|
||||
"source_hidden": false
|
||||
},
|
||||
"nteract": {
|
||||
"transient": {
|
||||
"deleting": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core import PipelineEndpoint\n",
|
||||
"\n",
|
||||
"pipeline_endpoint = PipelineEndpoint.publish(workspace=ws, name=\"ScheduledPipelineEndpoint\",\n",
|
||||
" pipeline=pipeline1, description=\"Publish pipeline endpoint for schedule test\")\n",
|
||||
"pipeline_endpoint"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
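A schedule that targets a pipeline endpoint runs whatever pipeline the endpoint currently points to, so you can publish a newer pipeline version later without recreating the schedule. A hedged sketch of that workflow (pipeline2 and the add_default call are illustrative and not part of this notebook):

    from azureml.pipeline.core import PipelineEndpoint

    endpoint = PipelineEndpoint.get(workspace=ws, name="ScheduledPipelineEndpoint")

    # Add a newer pipeline and make it the endpoint's default version; any
    # schedule against the endpoint will pick up this version going forward.
    endpoint.add_default(pipeline2)  # pipeline2 is a hypothetical updated Pipeline
    print(endpoint.get_default_version())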
@@ -196,14 +233,24 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Create a schedule for the pipeline using a recurrence\n",
|
||||
"### Create a schedule for the published pipeline using a recurrence\n",
|
||||
"This schedule will run on a specified recurrence interval."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"metadata": {
|
||||
"jupyter": {
|
||||
"outputs_hidden": false,
|
||||
"source_hidden": false
|
||||
},
|
||||
"nteract": {
|
||||
"transient": {
|
||||
"deleting": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule\n",
|
||||
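The diff truncates this code cell after the import. For reference, a recurrence-based schedule for the published pipeline is typically created along these lines (the name, frequency, and interval below are illustrative, not the notebook's actual values):

    from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule

    recurrence = ScheduleRecurrence(frequency="Hour", interval=4)

    schedule = Schedule.create(ws, name="My_Pipeline_Schedule",
                               pipeline_id=published_pipeline1.id,
                               experiment_name="Schedule_Run",
                               recurrence=recurrence,
                               description="Recurrence-based schedule",
                               wait_for_provisioning=True)
    print("Created schedule with id: {}".format(schedule.id))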
@@ -308,7 +355,11 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"metadata": {
|
||||
"gather": {
|
||||
"logged": 1606157800044
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
|
||||
@@ -410,7 +461,11 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"metadata": {
|
||||
"gather": {
|
||||
"logged": 1606157862620
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
|
||||
@@ -419,14 +474,151 @@
|
||||
"schedule = Schedule.get(ws, schedule_id)\n",
|
||||
"print(\"Disabled schedule {}. New status is: {}\".format(schedule.id, schedule.status))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"nteract": {
|
||||
"transient": {
|
||||
"deleting": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Create a schedule for a pipeline endpoint\n",
|
||||
"Alternative to creating schedules for a published pipeline, you can also create schedules to run pipeline endpoints.\n",
|
||||
"Retrieve the pipeline endpoint id to create a schedule. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"gather": {
|
||||
"logged": 1606157888851
|
||||
},
|
||||
"jupyter": {
|
||||
"outputs_hidden": false,
|
||||
"source_hidden": false
|
||||
},
|
||||
"nteract": {
|
||||
"transient": {
|
||||
"deleting": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline_endpoint_by_name = PipelineEndpoint.get(workspace=ws, name=\"ScheduledPipelineEndpoint\")\n",
|
||||
"published_pipeline_endpoint_id = pipeline_endpoint_by_name.id\n",
|
||||
"\n",
|
||||
"recurrence = ScheduleRecurrence(frequency=\"Day\", interval=2, hours=[22], minutes=[30]) # Runs every other day at 10:30pm\n",
|
||||
"\n",
|
||||
"schedule = Schedule.create_for_pipeline_endpoint(workspace=ws, name=\"My_Endpoint_Schedule\",\n",
|
||||
" pipeline_endpoint_id=published_pipeline_endpoint_id,\n",
|
||||
" experiment_name='Schedule_Run',\n",
|
||||
" recurrence=recurrence, description=\"Schedule_Run\",\n",
|
||||
" wait_for_provisioning=True)\n",
|
||||
"\n",
|
||||
"# You may want to make sure that the schedule is provisioned properly\n",
|
||||
"# before making any further changes to the schedule\n",
|
||||
"\n",
|
||||
"print(\"Created schedule with id: {}\".format(schedule.id))"
|
||||
]
|
||||
},
|
||||
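Because wait_for_provisioning=True is passed, the call returns once provisioning completes. A small sketch (an assumption, not part of the notebook) of re-fetching the schedule to confirm it is usable before relying on it:

    # Re-fetch by ID and check the status; a healthy schedule reports "Active".
    fetched = Schedule.get(ws, schedule.id)
    print(fetched.name, fetched.status)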
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"nteract": {
|
||||
"transient": {
|
||||
"deleting": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Get all schedules for a given pipeline endpoint\n",
|
||||
"Once you have the pipeline endpoint ID, then you can get all schedules for that pipeline endopint."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"jupyter": {
|
||||
"outputs_hidden": false,
|
||||
"source_hidden": false
|
||||
},
|
||||
"nteract": {
|
||||
"transient": {
|
||||
"deleting": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"schedules_for_pipeline_endpoints = Schedule.\\\n",
|
||||
" get_schedules_for_pipeline_endpoint_id(ws,\n",
|
||||
" pipeline_endpoint_id=published_pipeline_endpoint_id)\n",
|
||||
"print('Got all schedules for pipeline endpoint:', published_pipeline_endpoint_id, 'Count:',\n",
|
||||
" len(schedules_for_pipeline_endpoints))\n",
|
||||
"\n",
|
||||
"print('done')"
|
||||
]
|
||||
},
|
||||
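The call returns a list of Schedule objects; a quick sketch of inspecting them (attribute names per the azureml-pipeline-core Schedule class):

    # Print basic details for each schedule attached to the endpoint.
    for s in schedules_for_pipeline_endpoints:
        print(s.id, s.name, s.status)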
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"nteract": {
|
||||
"transient": {
|
||||
"deleting": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"### Disable the schedule created for running the pipeline endpont\n",
|
||||
"Recall the best practice of disabling schedules when not in use.\n",
|
||||
"The number of schedule triggers allowed per month per region per subscription is 100,000.\n",
|
||||
"This is calculated using the project trigger counts for all active schedules."
|
||||
]
|
||||
},
|
||||
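To make the 100,000-trigger quota concrete, a rough, illustrative calculation (not from the notebook):

    # A schedule firing every 5 minutes produces 12 * 24 * 30 = 8,640 triggers
    # in a 30-day month, so only about 11 such schedules fit under the cap.
    triggers_per_schedule = 12 * 24 * 30
    print(100000 // triggers_per_schedule)  # -> 11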
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"jupyter": {
|
||||
"outputs_hidden": false,
|
||||
"source_hidden": false
|
||||
},
|
||||
"nteract": {
|
||||
"transient": {
|
||||
"deleting": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"fetched_schedule = Schedule.get(ws, schedule_id)\n",
|
||||
"print(\"Using schedule with id: {}\".format(fetched_schedule.id))\n",
|
||||
"\n",
|
||||
"# Set the wait_for_provisioning flag to False if you do not want to wait \n",
|
||||
"# for the call to provision the schedule in the backend.\n",
|
||||
"fetched_schedule.disable(wait_for_provisioning=True)\n",
|
||||
"fetched_schedule = Schedule.get(ws, schedule_id)\n",
|
||||
"print(\"Disabled schedule {}. New status is: {}\".format(fetched_schedule.id, fetched_schedule.status))"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "sanpil"
|
||||
"name": "shbijlan"
|
||||
}
|
||||
],
|
||||
"categories": [
|
||||
"how-to-use-azureml",
|
||||
"machine-learning-pipelines",
|
||||
"intro-to-pipelines"
|
||||
],
|
||||
"category": "tutorial",
|
||||
"compute": [
|
||||
"AML Compute"
|
||||
@@ -441,7 +633,7 @@
|
||||
"framework": [
|
||||
"Azure ML"
|
||||
],
|
||||
"friendly_name": "How to Setup a Schedule for a Published Pipeline",
|
||||
"friendly_name": "How to Setup a Schedule for a Published Pipeline or Pipeline Endpoint",
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.6",
|
||||
"language": "python",
|
||||
@@ -459,6 +651,9 @@
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.7"
|
||||
},
|
||||
"nteract": {
|
||||
"version": "nteract-front-end@1.0.0"
|
||||
},
|
||||
"order_index": 10,
|
||||
"star_tag": [
|
||||
"featured"
|
||||
@@ -466,7 +661,7 @@
|
||||
"tags": [
|
||||
"None"
|
||||
],
|
||||
"task": "Demonstrates the use of Schedules for Published Pipelines"
|
||||
"task": "Demonstrates the use of Schedules for Published Pipelines and Pipeline endpoints"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
|
||||
@@ -30,7 +30,7 @@
|
||||
"## Introduction\n",
|
||||
"In this example we showcase how you can use AzureML Dataset to load data for AutoML via AML Pipeline. \n",
|
||||
"\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you have executed the [configuration](https://aka.ms/pl-config) before running this notebook.\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you have executed the [configuration](https://aka.ms/pl-config) before running this notebook, please also take a look at the [Automated ML setup-using-a-local-conda-environment](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning#setup-using-a-local-conda-environment) section to setup the environment.\n",
|
||||
"\n",
|
||||
"In this notebook you will learn how to:\n",
|
||||
"1. Create an `Experiment` in an existing `Workspace`.\n",
|
||||
|
||||
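A minimal sketch of step 1, assuming ws is the existing Workspace object from the configuration step (the experiment name is illustrative):

    from azureml.core import Experiment

    # Create (or reuse) an experiment in the workspace to group pipeline runs.
    experiment = Experiment(ws, "automl-pipeline-demo")
    print(experiment.name)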
@@ -2,7 +2,3 @@ name: aml-pipelines-with-automated-machine-learning-step
|
||||
dependencies:
|
||||
- pip:
|
||||
- azureml-sdk
|
||||
- azureml-train-automl
|
||||
- azureml-widgets
|
||||
- matplotlib
|
||||
- pandas_ml
|
||||
|
||||
@@ -284,7 +284,7 @@
|
||||
"# Specify CondaDependencies obj, add necessary packages\n",
|
||||
"aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(\n",
|
||||
" conda_packages=['pandas','scikit-learn'], \n",
|
||||
" pip_packages=['azureml-sdk[automl,explain]', 'pyarrow'])\n",
|
||||
" pip_packages=['azureml-sdk[automl]', 'pyarrow'])\n",
|
||||
"\n",
|
||||
"print (\"Run configuration created.\")"
|
||||
]
|
||||
|
||||