From d5c247b005199b642d8a13dc0a83a95615fef099 Mon Sep 17 00:00:00 2001
From: Roope Astala
Date: Wed, 3 Oct 2018 14:23:49 -0400
Subject: [PATCH] Update automl readme

---
 automl/README.md | 118 +++++++++++++++++++++++++++++------------------
 1 file changed, 74 insertions(+), 44 deletions(-)

diff --git a/automl/README.md b/automl/README.md
index 729ad62d..4b844185 100644
--- a/automl/README.md
+++ b/automl/README.md
@@ -1,24 +1,52 @@
# Table of Contents
-1. [Auto ML Introduction](#introduction)
-2. [Running samples in a Local Conda environment](#localconda)
-3. [Auto ML SDK Sample Notebooks](#samples)
-4. [Documentation](#documentation)
-5. [Running using python command](#pythoncommand)
-6. [Troubleshooting](#troubleshooting)
+1. [Automated ML Introduction](#introduction)
+1. [Running samples in Azure Notebooks](#jupyter)
+1. [Running samples in a Local Conda environment](#localconda)
+1. [Automated ML SDK Sample Notebooks](#samples)
+1. [Documentation](#documentation)
+1. [Running using python command](#pythoncommand)
+1. [Troubleshooting](#troubleshooting)
+
+
+# Automated ML introduction
+Automated machine learning (automated ML) builds high-quality machine learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, and automated ML will give you a high-quality machine learning model that you can use for predictions.
-# Auto ML Introduction
-AutoML builds high quality Machine Learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, AutoML will give you a high quality machine learning model that you can use for predictions.

If you are new to Data Science, AutoML will help you get jumpstarted by simplifying machine learning model building. It abstracts away the need to perform model and hyperparameter selection, and in one step creates a high-quality trained model for you to use. If you are an experienced data scientist, AutoML will help increase your productivity by intelligently performing model and hyperparameter selection for your training, and it generates high-quality models much more quickly than manually specifying several combinations of parameters and running training jobs. AutoML provides visibility and access to all the training jobs and the performance characteristics of the models, to help you further tune the pipeline if you desire.
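For a quick feel for this workflow before diving into the notebooks, here is a minimal sketch using the Azure ML SDK. This is a hedged illustration, not part of the samples: it assumes the automated ML components of `azureml-sdk` are installed and that 00.configuration.ipynb has already written a workspace config; the experiment name and iteration count are made up.

```python
# Minimal automated ML sketch: train a classifier and get the best model back.
from sklearn.datasets import load_digits
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                         # config written by 00.configuration.ipynb
experiment = Experiment(ws, "automl-readme-sample")  # hypothetical experiment name

X, y = load_digits(return_X_y=True)                  # any labelled dataset works here

automl_config = AutoMLConfig(task="classification",
                             primary_metric="AUC_weighted",
                             iterations=10,          # illustrative value
                             X=X,
                             y=y)

run = experiment.submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()            # use fitted_model.predict(...) afterwards
```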
+
+## Running samples in Azure Notebooks - Jupyter-based notebooks in the Azure cloud
-# Running samples in a Local Conda environment
+1. [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://aka.ms/aml-clone-azure-notebooks)
+[Import sample notebooks](https://aka.ms/aml-clone-azure-notebooks) into Azure Notebooks.
+1. Follow the instructions in the [00.configuration](00.configuration.ipynb) notebook to create and connect to a workspace.
+1. Open one of the sample notebooks.
+
+    **Make sure the Azure Notebook kernel is set to `Python 3.6`** when you open a notebook.
+
+    ![set kernel to Python 3.6](../images/python36.png)

-You can run these notebooks in Azure Notebooks without any extra installation. To run these notebook on your own notebook server, use these installation instructions.
+
+## Running samples in a Local Conda environment
+
+To run these notebooks on your own notebook server, use these installation instructions.
+
+The instructions below will install everything you need and then start a Jupyter notebook.
+To start your Jupyter notebook manually, use:
+
+```
+conda activate azure_automl
+jupyter notebook
+```
+
+or on Mac:
+
+```
+source activate azure_automl
+jupyter notebook
+```

-It is best if you create a new conda environment locally to try this SDK, so it doesn't mess up with your existing Python environment.

### 1. Install mini-conda from [here](https://conda.io/miniconda.html), choose Python 3.7 or higher.
- **Note**: if you already have conda installed, you can keep using it, but it should be version 4.4.10 or later (as shown by `conda -V`). If you have a previous version installed, you can update it using the command `conda update conda`.

@@ -48,19 +76,19 @@ bash automl_setup_mac.sh
cd to the **automl** folder where the sample notebooks were extracted and then run:
```
-automl_setup_linux.sh
+bash automl_setup_linux.sh
```

### 4. Running configuration.ipynb
- Before running any samples, you next need to run the configuration notebook. Click on the 00.configuration.ipynb notebook.
-- Please make sure you use the Python [conda env:azure_automl] kernel when running this notebook.
- Execute the cells in the notebook to register the Machine Learning Services Resource Provider and create a workspace. (*instructions in notebook*)

### 5. Running Samples
- Please make sure you use the Python [conda env:azure_automl] kernel when trying the sample notebooks.
- Follow the instructions in the individual notebooks to explore various features in AutoML.

-# Auto ML SDK Sample Notebooks
+
+# Automated ML SDK Sample Notebooks
- [00.configuration.ipynb](00.configuration.ipynb)
  - Register Machine Learning Services Resource Provider
  - Create new Azure ML Workspace
@@ -87,7 +115,7 @@
- [03b.auto-ml-remote-batchai.ipynb](03b.auto-ml-remote-batchai.ipynb)
  - Dataset: scikit-learn's [digit dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)
-  - Example of using Auto ML for classification using a remote Batch AI compute for training
+  - Example of using automated ML for classification using a remote Batch AI compute for training
  - Parallel execution of iterations
  - Async tracking of progress
  - Cancelling individual iterations or entire run
@@ -143,20 +171,17 @@
- [13.auto-ml-dataprep.ipynb](13.auto-ml-dataprep.ipynb)
  - Using DataPrep for reading data

-- [14a.auto-ml-classification-ensemble.ipynb](14a.auto-ml-classification-ensemble.ipynb)
-  - Classification with ensembling
-
-- [14b.auto-ml-regression-ensemble.ipynb](14b.auto-ml-regression-ensemble.ipynb)
-  - Regression with ensembling
-
-# Documentation
+
+# Documentation
## Table of Contents
-1. [Auto ML Settings ](#automlsettings)
-2. [Cross validation split options](#cvsplits)
-3. [Get Data Syntax](#getdata)
-4. [Data pre-processing and featurization](#preprocessing)
+1. [Automated ML Settings](#automlsettings)
+1. [Cross validation split options](#cvsplits)
+1. [Get Data Syntax](#getdata)
+1. [Data pre-processing and featurization](#preprocessing)
+
+
+## Automated ML Settings
-## Auto ML Settings

|Property|Description|Default|
|-|-|-|
|**primary_metric**|This is the metric that you want to optimize.<br><br> Classification supports the following primary metrics: <br>accuracy<br>AUC_weighted<br>balanced_accuracy<br>average_precision_score_weighted<br>precision_score_weighted<br><br> Regression supports the following primary metrics: <br>spearman_correlation<br>normalized_root_mean_squared_error<br>r2_score<br>normalized_mean_absolute_error<br>normalized_root_mean_squared_log_error|Classification: accuracy<br><br>Regression: spearman_correlation|
@@ -170,7 +195,8 @@ automl_setup_linux.sh
|**exit_score**|*double* value indicating the target for *primary_metric*.<br>Once the target is surpassed, the run terminates.|None|
|**blacklist_algos**|*Array* of *strings* indicating pipelines to ignore for Auto ML.<br><br> Allowed values for **Classification**:<br>LogisticRegression<br>SGDClassifierWrapper<br>NBWrapper<br>BernoulliNB<br>SVCWrapper<br>LinearSVMWrapper<br>KNeighborsClassifier<br>DecisionTreeClassifier<br>RandomForestClassifier<br>ExtraTreesClassifier<br>gradient boosting<br>LightGBMClassifier<br><br> Allowed values for **Regression**:<br>ElasticNet<br>GradientBoostingRegressor<br>DecisionTreeRegressor<br>KNeighborsRegressor<br>LassoLars<br>SGDRegressor<br>RandomForestRegressor<br>ExtraTreesRegressor|None|
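For illustration, the settings above can be passed to AutoMLConfig as keyword arguments. This is a hedged sketch: the chosen values, and the inclusion of *n_cross_validations* (described in the next section), are assumptions for demonstration rather than recommendations.

```python
# Illustrative settings only; values are assumptions, not recommendations.
from sklearn.datasets import load_digits
from azureml.train.automl import AutoMLConfig

X, y = load_digits(return_X_y=True)

automl_settings = {
    "primary_metric": "AUC_weighted",   # the metric to optimize (see table above)
    "exit_score": 0.999,                # terminate the run once this score is surpassed
    "blacklist_algos": ["KNeighborsClassifier", "LinearSVMWrapper"],  # pipelines to skip
    "n_cross_validations": 5,           # cross validation, described in the next section
}

automl_config = AutoMLConfig(task="classification", X=X, y=y, **automl_settings)
```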
-## Cross validation split options
+
+## Cross validation split options
### K-Folds Cross Validation
Use the *n_cross_validations* setting to specify the number of cross validations. The training data set will be randomly split into *n_cross_validations* folds of equal size. During each cross validation round, one of the folds will be used for validation of the model trained on the remaining folds. This process repeats for *n_cross_validations* rounds, until each fold has been used once as the validation set. Finally, the average scores across all *n_cross_validations* rounds will be reported, and the corresponding model will be retrained on the whole training data set.
@@ -180,7 +206,8 @@ Use *validation_size* to specify the percentage of the training data set that sh
### Custom train and validation set
You can specify separate train and validation sets, either through get_data() or directly to the fit method.

-## get_data() syntax
+
+## get_data() syntax
The *get_data()* function can be used to return a dictionary with these values:

|Key|Type|Dependency|Mutually Exclusive with|Description|
@@ -196,21 +223,23 @@ The *get_data()* function can be used to return a dictionary with these values:
|columns|Array of strings|data_train||*Optional* Whitelist of columns to use for features|
|cv_splits_indices|Array of integers|data_train||*Optional* List of indexes to split the data for cross validation|
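To make the dictionary contract concrete, here is a hypothetical get_data.py. Only the *columns* and *cv_splits_indices* rows of the table are visible in this excerpt of the diff, so the *data_train* and *label* keys, the file path, and the column names below are assumptions for illustration.

```python
# Hypothetical get_data.py; keys other than "columns" are assumptions here.
import pandas as pd

def get_data():
    df = pd.read_csv("./train.csv")             # hypothetical training file
    return {
        "data_train": df,                       # the key the Dependency column above refers to
        "label": "target",                      # assumed key naming the label column
        "columns": ["feature_1", "feature_2"],  # optional whitelist of feature columns
    }
```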
-## Data pre-processing and featurization
-If you use "preprocess=True", the following data preprocessing steps are performed automatically for you:
-### 1. Dropping high cardinality or no variance features
-- Features with no useful information are dropped from training and validation sets. These include features with all values missing, same value across all rows or with extremely high cardinality (e.g., hashes, IDs or GUIDs).
-### 2. Missing value imputation
-- For numerical features, missing values are imputed with average of values in the column.
-- For categorical features, missing values are imputed with most frequent value.
-### 3. Generating additional features
-- For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.
-- For Text features: Term frequency based on bi-grams and tri-grams, Count vectorizer.
-### 4. Transformations and encodings
-- Numeric features with very few unique values are transformed into categorical features.
-- Depending on cardinality of categorical features label encoding or (hashing) one-hot encoding is performed.
-# Running using python command
+
+## Data pre-processing and featurization
+If you use `preprocess=True`, the following data preprocessing steps are performed automatically for you:
+
+1. Dropping high cardinality or no variance features
+   - Features with no useful information are dropped from training and validation sets. These include features with all values missing, the same value across all rows, or extremely high cardinality (e.g., hashes, IDs or GUIDs).
+2. Missing value imputation
+   - For numerical features, missing values are imputed with the average of the values in the column.
+   - For categorical features, missing values are imputed with the most frequent value.
+3. Generating additional features
+   - For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.
+   - For Text features: Term frequency based on bi-grams and tri-grams, Count vectorizer.
+4. Transformations and encodings
+   - Numeric features with very few unique values are transformed into categorical features.
+
+
+# Running using python command
Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file.
You can then run this file using the python command.
However, on Windows, the file needs to be modified before it can be run.
The following condition must be added to the main code in the file:

    if __name__ == "__main__":

The main code of the file must be indented so that it is under this condition.

-# Troubleshooting
+
+# Troubleshooting
## Iterations fail and the log contains "MemoryError"
This can be caused by insufficient memory on the DSVM. AutoML loads all training data into memory, so the available memory should be larger than the training data size. If you are using a remote DSVM, memory is needed for each concurrent iteration. The concurrent_iterations setting specifies the maximum number of concurrent iterations. For example, if the training data size is 8 GB and concurrent_iterations is set to 10, at least 80 GB of memory is required.
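The sizing rule above is simple multiplication; as a runnable check:

```python
# Rough sizing rule from the paragraph above: every concurrent iteration
# loads the full training data into memory.
training_data_gb = 8         # size of the training data
concurrent_iterations = 10   # the concurrent_iterations setting
print(f"Minimum memory: {training_data_gb * concurrent_iterations} GB")  # -> 80 GB
```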