mirror of https://github.com/Azure/MachineLearningNotebooks.git (synced 2025-12-19 17:17:04 -05:00)
update samples from Release-69 as a part of SDK release
@@ -16,6 +16,7 @@ The following tutorials are intended to provide an introductory overview of Azur
| Tutorial | Description | Notebook | Task | Framework |
| --- | --- | --- | --- | --- |
+| [Get Started (day1)](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup-local) | Learn the fundamental concepts of Azure Machine Learning to help onboard your existing code to Azure Machine Learning. This tutorial focuses heavily on submitting machine learning jobs to scalable cloud-based compute clusters. | [get-started-day1](get-started-day1/day1-part1-setup.ipynb) | Learn Azure Machine Learning Concepts | PyTorch
| [Train your first ML Model](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-train) | Learn the foundational design patterns in Azure Machine Learning and train a scikit-learn model based on a diabetes data set. | [tutorial-quickstart-train-model.ipynb](create-first-ml-experiment/tutorial-1st-experiment-sdk-train.ipynb) | Regression | Scikit-Learn
| [Train an image classification model](https://docs.microsoft.com/azure/machine-learning/tutorial-train-models-with-aml) | Train a scikit-learn image classification model. | [img-classification-part1-training.ipynb](image-classification-mnist-data/img-classification-part1-training.ipynb) | Image Classification | Scikit-Learn
| [Deploy an image classification model](https://docs.microsoft.com/azure/machine-learning/tutorial-deploy-models-with-aml) | Deploy a scikit-learn image classification model to Azure Container Instances. | [img-classification-part2-deploy.ipynb](image-classification-mnist-data/img-classification-part2-deploy.ipynb) | Image Classification | Scikit-Learn
@@ -272,7 +272,7 @@
"For this task, you submit the job to run on the remote training cluster you set up earlier. To submit a job you:\n",
"* Create a directory\n",
"* Create a training script\n",
-"* Create an estimator object\n",
+"* Create a script run configuration\n",
"* Submit the job \n",
"\n",
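Read in isolation, those four bullets map onto a handful of SDK calls. Below is a minimal sketch of the flow with the v1 `azureml-core` SDK, not taken from this commit: the workspace `config.json`, the `conda_dependencies.yml` file, the compute target name `cpu-cluster`, and the experiment name are all illustrative assumptions.

```python
# Minimal sketch of the submit workflow described above (azureml-core v1 SDK).
# The workspace config, conda file, compute target name and experiment name
# are illustrative assumptions, not values taken from this commit.
import os
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig

ws = Workspace.from_config()                         # reads config.json
exp = Experiment(workspace=ws, name="sklearn-mnist")

# 1. Create a directory; everything in it is uploaded to the compute nodes.
script_folder = os.path.join(os.getcwd(), "sklearn-mnist")
os.makedirs(script_folder, exist_ok=True)

# 2. A training script (train.py) is assumed to already live in script_folder.

# 3. Create a script run configuration: script + environment + compute target.
env = Environment.from_conda_specification(name="tutorial-env",
                                           file_path="conda_dependencies.yml")
src = ScriptRunConfig(source_directory=script_folder,
                      script="train.py",
                      compute_target=ws.compute_targets["cpu-cluster"],
                      environment=env)

# 4. Submit the job and get back a Run object you can monitor.
run = exp.submit(config=src)
print(run.get_portal_url())
```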
"### Create a directory\n",
@@ -400,16 +400,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"### Create an estimator\n",
+"### Configure the training job\n",
"\n",
-"An estimator object is used to submit the run. Azure Machine Learning has pre-configured estimators for common machine learning frameworks, as well as generic Estimator. Create an estimator by specifying\n",
+"Create a ScriptRunConfig object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Configure the ScriptRunConfig by specifying:\n",
"\n",
-"* The name of the estimator object, `est`\n",
"* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution. \n",
"* The compute target. In this case you will use the AmlCompute you created\n",
"* The training script name, train.py\n",
"* An environment that contains the libraries needed to run the script\n",
-"* Parameters required from the training script. \n",
+"* Arguments required from the training script. \n",
"\n",
"In this tutorial, the target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for execution. The data_folder is set to use the dataset.\n",
"\n",
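The bullet about arguments has a counterpart inside the training script itself. As a hedged sketch (the flag names mirror the tutorial's `--data-folder` and `--regularization`, but this is not the repository's actual `train.py`), the script might read the mounted data folder and the regularization rate, then log metrics back to the submitted run:

```python
# Illustrative sketch of how a train.py might consume the arguments above;
# the real training script in the repository may differ.
import argparse
import os
from azureml.core import Run

parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', type=str, dest='data_folder',
                    help='mount point of the MNIST file dataset')
parser.add_argument('--regularization', type=float, dest='reg', default=0.5,
                    help='inverse regularization strength')
args = parser.parse_args()

print('Data folder:', args.data_folder)
print('Files found:', os.listdir(args.data_folder)[:5])

# Inside a submitted job, get a handle to the current run for metric logging.
run = Run.get_context()
run.log('regularization rate', args.reg)
```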
@@ -441,7 +440,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Then, create the estimator by specifying the training script, compute target and environment."
+"Then, create the ScriptRunConfig by specifying the training script, compute target and environment."
]
},
{
@@ -454,19 +453,15 @@
},
"outputs": [],
"source": [
-"from azureml.train.estimator import Estimator\n",
+"from azureml.core import ScriptRunConfig\n",
"\n",
-"script_params = {\n",
-"    # to mount files referenced by mnist dataset\n",
-"    '--data-folder': mnist_file_dataset.as_named_input('mnist_opendataset').as_mount(),\n",
-"    '--regularization': 0.5\n",
-"}\n",
+"args = ['--data-folder', mnist_file_dataset.as_mount(), '--regularization', 0.5]\n",
"\n",
-"est = Estimator(source_directory=script_folder,\n",
-"                script_params=script_params,\n",
-"                compute_target=compute_target,\n",
-"                environment_definition=env,\n",
-"                entry_script='train.py')"
+"src = ScriptRunConfig(source_directory=script_folder,\n",
+"                      script='train.py', \n",
+"                      arguments=args,\n",
+"                      compute_target=compute_target,\n",
+"                      environment=env)"
]
},
{
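One detail worth calling out: the removed Estimator cell mounted the dataset as a named input (`as_named_input('mnist_opendataset').as_mount()`), while the new cell passes a plain `as_mount()` in the argument list. ScriptRunConfig also accepts the named-input form if you want the mounted dataset to appear under a name on the run. A sketch reusing the notebook's variables (`mnist_file_dataset`, `script_folder`, `compute_target`, and `env` are assumed to be defined in earlier cells):

```python
# Variant of the new cell that keeps the named input, so the mounted dataset
# shows up as 'mnist_opendataset' on the run; the surrounding variables are
# assumed to come from earlier notebook cells.
from azureml.core import ScriptRunConfig

args = ['--data-folder',
        mnist_file_dataset.as_named_input('mnist_opendataset').as_mount(),
        '--regularization', 0.5]

src = ScriptRunConfig(source_directory=script_folder,
                      script='train.py',
                      arguments=args,
                      compute_target=compute_target,
                      environment=env)
```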
@@ -475,7 +470,7 @@
"source": [
"### Submit the job to the cluster\n",
"\n",
-"Run the experiment by submitting the estimator object. And you can navigate to Azure portal to monitor the run."
+"Run the experiment by submitting the ScriptRunConfig object. And you can navigate to Azure portal to monitor the run."
]
},
{
@@ -490,7 +485,7 @@
},
"outputs": [],
"source": [
-"run = exp.submit(config=est)\n",
+"run = exp.submit(config=src)\n",
"run"
]
},
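If you want the notebook (or a plain script) to block until the job finishes rather than just displaying the run object, a small follow-up sketch, assuming `exp` and `src` from the cells above:

```python
# Submit the configured job, then block until it completes while streaming
# the driver log; assumes exp and src from the cells above.
run = exp.submit(config=src)
print(run.get_portal_url())        # open this link to monitor the run in the Azure portal

run.wait_for_completion(show_output=True)
print(run.get_metrics())           # whatever metrics train.py logged with run.log(...)
```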
@@ -502,11 +497,11 @@
"\n",
"## Monitor a remote run\n",
"\n",
-"In total, the first run takes **approximately 10 minutes**. But for subsequent runs, as long as the dependencies (`conda_packages` parameter in the above estimator constructor) don't change, the same image is reused and hence the container start up time is much faster.\n",
+"In total, the first run takes **approximately 10 minutes**. But for subsequent runs, as long as the dependencies in the Azure ML environment don't change, the same image is reused and hence the container start up time is much faster.\n",
"\n",
"Here is what's happening while you wait:\n",
"\n",
-"- **Image creation**: A Docker image is created matching the Python environment specified by the estimator. The image is built and stored in the ACR (Azure Container Registry) associated with your workspace. Image creation and uploading takes **about 5 minutes**. \n",
+"- **Image creation**: A Docker image is created matching the Python environment specified by the Azure ML environment. The image is built and stored in the ACR (Azure Container Registry) associated with your workspace. Image creation and uploading takes **about 5 minutes**. \n",
"\n",
"  This stage happens once for each Python environment since the container is cached for subsequent runs. During image creation, logs are streamed to the run history. You can monitor the image creation progress using these logs.\n",
"\n",
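While waiting, the usual monitoring options are the run details widget inside Jupyter or simple polling from any Python session. A sketch, assuming the `azureml-widgets` package is installed locally (an assumption about your environment, not something this commit changes):

```python
# In-notebook monitoring: live widget with status, streamed logs and metrics.
from azureml.widgets import RunDetails
RunDetails(run).show()

# Polling alternatives that also work outside Jupyter:
print(run.get_status())            # e.g. Queued, Preparing, Running, Completed
details = run.get_details()
print(details['status'], details.get('startTimeUtc'))
```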
@@ -687,7 +682,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.7.6"
+"version": "3.6.9"
},
"msauthor": "roastala",
"network_required": false