Table of Contents
- Automated ML Introduction
- Running samples in Azure Notebooks
- Running samples in Azure Databricks
- Running samples in a Local Conda environment
- Automated ML SDK Sample Notebooks
- Documentation
- Running using python command
- Troubleshooting
Automated ML introduction
Automated machine learning (automated ML) builds high-quality machine learning models for you by automating model and hyperparameter selection. Bring a labeled dataset that you want to build a model for, and automated ML will give you a high-quality machine learning model that you can use for predictions.
If you are new to data science, AutoML will help you get started by simplifying machine learning model building. It abstracts away model and hyperparameter selection and, in one step, creates a high-quality trained model for you to use.
If you are an experienced data scientist, AutoML will increase your productivity by intelligently performing model and hyperparameter selection for your training runs, generating high-quality models much more quickly than manually trying several parameter combinations and running training jobs. AutoML provides visibility into, and access to, all the training jobs and the performance characteristics of the models, to help you further tune the pipeline if you desire.
Below are the three execution environments supported by AutoML.
Running samples in Azure Notebooks - Jupyter based notebooks in the Azure cloud
- Import the sample notebooks into Azure Notebooks.
- Follow the instructions in the configuration notebook to create and connect to a workspace.
- Open one of the sample notebooks.
Running samples in Azure Databricks
NOTE: Create your Azure Databricks cluster as v4.x (high concurrency preferred) with Python 3 selected in the dropdown.
NOTE: You need at least contributor access to your Azure subscription to run the notebook.
- Remove any previous SDK version and install the latest SDK by adding azureml-sdk[automl_databricks] as a PyPI library in the Azure Databricks workspace.
- Download the sample notebook 16a.auto-ml-classification-local-azuredatabricks from GitHub and import it into the Azure Databricks workspace.
- Attach the notebook to the cluster.
Running samples in a Local Conda environment
To run these notebooks on your own notebook server, use the installation instructions below.
The instructions below will install everything you need and then start a Jupyter notebook. To start your Jupyter notebook manually, use:
conda activate azure_automl
jupyter notebook
or on Mac:
source activate azure_automl
jupyter notebook
1. Install Miniconda, choosing Python 3.7 or higher.
- Note: if you already have conda installed, you can keep using it, but it should be version 4.4.10 or later (as shown by conda -V). If you have a previous version installed, you can update it with the command: conda update conda. There's no need to install Miniconda specifically.
2. Download the sample notebooks
- Download the sample notebooks from GitHub as zip and extract the contents to a local directory. The AutoML sample notebooks are in the "automl" folder.
3. Set up a new conda environment
The automl/automl_setup script creates a new conda environment, installs the necessary packages, configures the widget, and starts a Jupyter notebook. It takes the conda environment name as an optional parameter; the default conda environment name is azure_automl. The exact command depends on the operating system; see the specific sections below for Windows, Mac and Linux. It can take about 10 minutes to execute.
Windows
Start an Anaconda Prompt window, cd to the automl folder where the sample notebooks were extracted and then run:
automl_setup
Mac
Install "Command line developer tools" if it is not already installed (you can use the command: xcode-select --install).
Start a Terminal window, cd to the automl folder where the sample notebooks were extracted and then run:
bash automl_setup_mac.sh
Linux
cd to the automl folder where the sample notebooks were extracted and then run:
bash automl_setup_linux.sh
4. Run configuration.ipynb
- Before running any samples, you need to run the configuration notebook. Click on the configuration.ipynb notebook to open it.
- Execute the cells in the notebook to register the Machine Learning Services resource provider and create a workspace (instructions are in the notebook).
5. Run the samples
- Make sure you use the Python [conda env:azure_automl] kernel when running the sample notebooks.
- Follow the instructions in the individual notebooks to explore the various features of AutoML.
Automated ML SDK Sample Notebooks
- configuration.ipynb
  - Create a new Azure ML Workspace
  - Save the Workspace configuration file
- auto-ml-classification.ipynb
  - Dataset: scikit-learn's digit dataset
  - Simple example of using AutoML for classification
  - Uses local compute for training
- auto-ml-regression.ipynb
  - Dataset: scikit-learn's diabetes dataset
  - Simple example of using AutoML for regression
  - Uses local compute for training
- auto-ml-remote-execution.ipynb
  - Dataset: scikit-learn's digit dataset
  - Example of using AutoML for classification using a remote Linux DSVM for training
  - Parallel execution of iterations
  - Asynchronous tracking of progress
  - Cancelling individual iterations or the entire run
  - Retrieving models for any iteration or logged metric
  - Specifying AutoML settings as kwargs
- auto-ml-remote-batchai.ipynb
  - Dataset: scikit-learn's digit dataset
  - Example of using AutoML for classification using remote Batch AI compute for training
  - Parallel execution of iterations
  - Asynchronous tracking of progress
  - Cancelling individual iterations or the entire run
  - Retrieving models for any iteration or logged metric
  - Specifying AutoML settings as kwargs
- Dataset: Burning Man 2016 dataset
  - Handling text data with the preprocess flag
  - Reading data from a blob store for remote executions
  - Using pandas DataFrames for reading data
- auto-ml-missing-data-blacklist-early-termination.ipynb
  - Dataset: scikit-learn's digit dataset
  - Blacklisting certain pipelines
  - Specifying a target metric to indicate the stopping criteria
  - Handling missing data in the input
- auto-ml-sparse-data-train-test-split.ipynb
  - Dataset: scikit-learn's 20newsgroup dataset
  - Handling sparse datasets
  - Specifying custom train and validation sets
- auto-ml-exploring-previous-runs.ipynb
  - List all projects for the workspace
  - List all AutoML runs for a given project
  - Get details for an AutoML run (AutoML settings, run widget and all metrics)
  - Download the fitted pipeline for any iteration
- auto-ml-remote-execution-with-datastore.ipynb
  - Dataset: scikit-learn's digit dataset
  - Download the data and store it in a Datastore
- auto-ml-classification-with-deployment.ipynb
  - Dataset: scikit-learn's digit dataset
  - Simple example of using AutoML for classification
  - Registering the model
  - Creating an image and an ACI service
  - Testing the ACI service
- auto-ml-sample-weight.ipynb
  - How to specify sample_weight
  - The difference it makes to test results
- auto-ml-dataprep.ipynb
  - Using DataPrep for reading data
- auto-ml-dataprep-remote-execution.ipynb
  - Using DataPrep for reading data with remote execution
- auto-ml-classification-local-azuredatabricks.ipynb
  - Dataset: scikit-learn's digit dataset
  - Example of using AutoML for classification using Azure Databricks as the platform for training
- auto-ml-classification_with_tensorflow.ipynb
  - Dataset: scikit-learn's digit dataset
  - Simple example of using AutoML for classification with whitelisted TensorFlow models
  - Uses local compute for training
- auto-ml-forecasting-energy-demand.ipynb
  - Dataset: NYC energy demand data
  - Example of using AutoML to train a forecasting model
- auto-ml-forecasting-orange-juice-sales.ipynb
  - Dataset: Dominick's grocery sales of orange juice
  - Example of training an AutoML forecasting model on multiple time series
Documentation
Table of Contents
- Automated ML Settings
- Cross validation split options
- get_data() syntax
- Data pre-processing and featurization
Automated ML Settings
List of models for whitelist/blacklist (a usage sketch follows the lists below):
Classification
LogisticRegression
SGD
MultinomialNaiveBayes
BernoulliNaiveBayes
SVM
LinearSVM
KNN
DecisionTree
RandomForest
ExtremeRandomTrees
LightGBM
GradientBoosting
TensorFlowDNN
TensorFlowLinearClassifier
Regression
ElasticNet
GradientBoosting
DecisionTree
KNN
LassoLars
SGD
RandomForest
ExtremeRandomTrees
LightGBM
TensorFlowLinearRegressor
TensorFlowDNN
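For illustration, here is a minimal sketch of restricting the models AutoML tries, assuming an SDK release whose AutoMLConfig accepts the whitelist_models and blacklist_models parameters (names may differ between releases):

```python
# A minimal sketch: restrict AutoML to a subset of the algorithms above.
# Assumes azureml-train-automl is installed and that this release of
# AutoMLConfig accepts whitelist_models / blacklist_models.
from sklearn.datasets import load_digits
from azureml.train.automl import AutoMLConfig

X, y = load_digits(return_X_y=True)

automl_config = AutoMLConfig(
    task='classification',
    primary_metric='AUC_weighted',
    iterations=10,
    whitelist_models=['LogisticRegression', 'LightGBM'],  # only these are tried
    # blacklist_models=['KNN', 'SVM'],                    # or exclude a few instead
    X=X,
    y=y)
```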
Cross validation split options
K-Folds Cross Validation
Use the n_cross_validations setting to specify the number of cross validations. The training data set will be randomly split into n_cross_validations folds of equal size. During each cross validation round, one of the folds will be used for validation of the model trained on the remaining folds. This process repeats for n_cross_validations rounds until each fold has been used once as the validation set. Finally, the average scores across all n_cross_validations rounds will be reported, and the corresponding model will be retrained on the whole training data set.
Monte Carlo Cross Validation (a.k.a. Repeated Random Sub-Sampling)
Use validation_size to specify the percentage of the training data set that should be used for validation, and use n_cross_validations to specify the number of cross validations. During each cross validation round, a subset of size validation_size will be randomly selected for validation of the model trained on the remaining data. Finally, the average scores across all n_cross_validations rounds will be reported, and the corresponding model will be retrained on the whole training data set.
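The two options differ only in whether validation_size is supplied. A minimal sketch, assuming the AutoMLConfig parameter names described above:

```python
# Assumes azureml-train-automl is installed; X and y are any feature
# matrix and label vector (here scikit-learn's digit dataset).
from sklearn.datasets import load_digits
from azureml.train.automl import AutoMLConfig

X, y = load_digits(return_X_y=True)

# K-Folds: 5 folds of equal size, each used once for validation.
kfold_config = AutoMLConfig(
    task='classification',
    primary_metric='AUC_weighted',
    iterations=10,
    n_cross_validations=5,
    X=X,
    y=y)

# Monte Carlo: 5 rounds, each holding out a random 20% for validation.
monte_carlo_config = AutoMLConfig(
    task='classification',
    primary_metric='AUC_weighted',
    iterations=10,
    n_cross_validations=5,
    validation_size=0.2,
    X=X,
    y=y)
```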
Custom train and validation set
You can specify separate train and validation sets either through get_data() or directly to the fit method.
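A minimal sketch of the direct option, assuming the X_valid and y_valid values described in the get_data() table below are also accepted by AutoMLConfig:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from azureml.train.automl import AutoMLConfig

X, y = load_digits(return_X_y=True)
# Hold out an explicit 20% validation set instead of letting AutoML split.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=0)

automl_config = AutoMLConfig(
    task='classification',
    primary_metric='AUC_weighted',
    iterations=10,
    X=X_train,
    y=y_train,
    X_valid=X_valid,
    y_valid=y_valid)
```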
get_data() syntax
The get_data() function can be used to return a dictionary with these values:
| Key | Type | Dependency | Mutually Exclusive with | Description |
|---|---|---|---|---|
| X | Pandas Dataframe or Numpy Array | y | data_train, label, columns | All features to train with |
| y | Pandas Dataframe or Numpy Array | X | label | Label data to train with. For classification, this should be an array of integers. |
| X_valid | Pandas Dataframe or Numpy Array | X, y, y_valid | data_train, label | Optional All features to validate with. If this is not specified, X is split between train and validate |
| y_valid | Pandas Dataframe or Numpy Array | X, y, X_valid | data_train, label | Optional The label data to validate with. If this is not specified, y is split between train and validate |
| sample_weight | Pandas Dataframe or Numpy Array | y | data_train, label, columns | Optional A weight value for each label. Higher values indicate that the sample is more important. |
| sample_weight_valid | Pandas Dataframe or Numpy Array | y_valid | data_train, label, columns | Optional A weight value for each validation label. Higher values indicate that the sample is more important. If this is not specified, sample_weight is split between train and validate |
| data_train | Pandas Dataframe | label | X, y, X_valid, y_valid | All data (features+label) to train with |
| label | string | data_train | X, y, X_valid, y_valid | Which column in data_train represents the label |
| columns | Array of strings | data_train | | Optional Whitelist of columns to use for features |
| cv_splits_indices | Array of integers | data_train | | Optional List of indexes to split the data for cross validation |
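As a minimal sketch, a get_data() implementation returning the X/y form (train.csv and the label column name are hypothetical placeholders):

```python
# get_data.py — a sketch of the get_data() contract described above.
import pandas as pd

def get_data():
    # Hypothetical CSV; substitute your own data source and label column.
    df = pd.read_csv('train.csv')
    return {
        'X': df.drop(columns=['label']).values,  # all features to train with
        'y': df['label'].values  # labels; integers for classification
    }
```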
Data pre-processing and featurization
If you use preprocess=True, the following data preprocessing steps are performed automatically for you (a configuration sketch follows the list):
- Dropping high cardinality or no variance features
- Features with no useful information are dropped from training and validation sets. These include features with all values missing, same value across all rows or with extremely high cardinality (e.g., hashes, IDs or GUIDs).
- Missing value imputation
- For numerical features, missing values are imputed with the average of the values in the column.
- For categorical features, missing values are imputed with the most frequent value.
- Generating additional features
- For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.
- For Text features: Term frequency based on bi-grams and tri-grams, Count vectorizer.
- Transformations and encodings
- Numeric features with very few unique values are transformed into categorical features.
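All of these steps are switched on by a single flag; a minimal sketch, assuming the AutoMLConfig parameters used in the earlier examples:

```python
from sklearn.datasets import load_digits
from azureml.train.automl import AutoMLConfig

# Stand-in data; preprocessing matters most for raw tabular or text inputs.
X, y = load_digits(return_X_y=True)

automl_config = AutoMLConfig(
    task='classification',
    primary_metric='AUC_weighted',
    iterations=10,
    preprocess=True,  # enables the dropping/imputation/featurization above
    X=X,
    y=y)
```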
Running using python command
Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file. You can then run this file using the python command. However, on Windows the file needs to be modified before it can be run. The following condition must be added to the main code in the file:
if __name__ == "__main__":
The main code of the file must be indented so that it is under this condition.
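For example, an exported script would be restructured like this (wrapping the exported code in a main() function is just one convenient way to get the indentation):

```python
# exported_notebook.py — structure required to run with python on Windows.

def main():
    # ...code exported from the notebook goes here, under the guard so
    # that child processes spawned for concurrent iterations do not
    # re-execute it on import...
    pass

if __name__ == "__main__":
    main()
```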
Troubleshooting
Iterations fail and the log contains "MemoryError"
This can be caused by insufficient memory on the DSVM. AutoML loads all training data into memory, so the available memory should be more than the training data size. If you are using a remote DSVM, memory is needed for each concurrent iteration. The max_concurrent_iterations setting specifies the maximum number of concurrent iterations. For example, if the training data size is 8 GB and max_concurrent_iterations is set to 10, at least 80 GB of memory is required. To resolve this issue, allocate a DSVM with more memory or reduce the value specified for max_concurrent_iterations.
Iterations show as "Not Responding" in the RunDetails widget.
This can be caused by too many concurrent iterations for a remote DSVM. Each concurrent iteration usually takes 100% of a core when it is running. Some iterations can use multiple cores. So, the max_concurrent_iterations setting should always be less than the number of cores of the DSVM. To resolve this issue, try reducing the value specified for the max_concurrent_iterations setting.
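Both rules of thumb can be checked with a small helper; this is just a sketch, not part of the SDK:

```python
def check_dsvm_settings(training_data_gb, dsvm_memory_gb, dsvm_cores,
                        max_concurrent_iterations):
    """Warn about the two common remote-DSVM misconfigurations above."""
    needed_gb = training_data_gb * max_concurrent_iterations
    if dsvm_memory_gb < needed_gb:
        print("Possible MemoryError: need at least %d GB of memory, have %d GB"
              % (needed_gb, dsvm_memory_gb))
    if max_concurrent_iterations >= dsvm_cores:
        print("Iterations may show as Not Responding: keep "
              "max_concurrent_iterations below %d cores" % dsvm_cores)

# The 8 GB / 10 concurrent iterations example above, on a 56 GB, 8-core DSVM:
check_dsvm_settings(training_data_gb=8, dsvm_memory_gb=56, dsvm_cores=8,
                    max_concurrent_iterations=10)
```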