
Tuning hyperparameters is a key step in machine learning. Hyperparameter tuning can be performed automatically using techniques such as grid search, random search, and gradient-based optimization. This guide demonstrates how to tune hyperparameters manually, by performing a few tests using WSO2 Machine Learner.
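
For context, the sketch below illustrates one of the automatic approaches mentioned above (grid search) using scikit-learn. This is only a minimal illustration and is not part of WSO2 ML; a built-in dataset stands in for the Pima Indians Diabetes dataset.

```python
# A minimal sketch of automatic hyperparameter tuning via grid search,
# using scikit-learn (shown only to illustrate the idea; not WSO2 ML).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Candidate values for each hyperparameter; the grid search tries every
# combination and keeps the one with the best cross-validated AUC.
param_grid = {
    "C": [0.01, 0.1, 1.0, 10.0],   # inverse regularization strength
    "penalty": ["l1", "l2"],       # regularization type
}
search = GridSearchCV(
    LogisticRegression(solver="liblinear", max_iter=1000),
    param_grid,
    scoring="roc_auc",
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```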


Goals of this guide

This guide uses the well-known Pima Indians Diabetes dataset and the Logistic Regression with Mini-Batch Gradient Descent algorithm to perform the analysis. The hyperparameters of this algorithm are as follows.

  • Iterations: The number of times the optimizer runs before completing the optimization process.
  • Learning Rate: The step size of the optimization algorithm.
  • Regularization Type: The type of regularization. WSO2 Machine Learner supports L1 and L2 regularization.
  • Regularization Parameter: Controls the model complexity, and thereby helps to avoid overfitting.
  • SGD Data Fraction: The fraction of the training dataset used in a single iteration of the optimization algorithm.
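
To make these hyperparameters concrete, here is a minimal NumPy sketch of logistic regression trained with mini-batch gradient descent. It is an illustration of the algorithm family only, not WSO2 ML's internal implementation; the function and parameter names are invented for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg_minibatch(X, y, iterations=100, learning_rate=0.1,
                           reg_type="L2", reg_param=0.01,
                           sgd_data_fraction=0.1):
    """Logistic regression via mini-batch gradient descent.

    Each keyword argument corresponds to one hyperparameter in the list
    above (names are illustrative, not WSO2 ML's API). y must be 0/1.
    """
    n, d = X.shape
    w = np.zeros(d)
    batch_size = max(1, int(sgd_data_fraction * n))  # SGD Data Fraction
    rng = np.random.default_rng(0)
    for _ in range(iterations):                      # Iterations
        idx = rng.choice(n, size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        # Gradient of the logistic loss on the mini-batch.
        grad = Xb.T @ (sigmoid(Xb @ w) - yb) / batch_size
        if reg_type == "L2":                         # Regularization Type
            grad += reg_param * w                    # Regularization Parameter
        elif reg_type == "L1":
            grad += reg_param * np.sign(w)
        w -= learning_rate * grad                    # Learning Rate
    return w
```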

This guide demonstrates the following goals, while keeping the other hyperparameters in the above list at constant values.

  • Finding the optimal Learning Rate and Number of Iterations that improve the Area Under Curve (AUC). For more information on the area under the ROC curve, see Model Evaluation Measures. (A short sketch of computing AUC follows this list.)

  • Finding the relationship between Learning Rate and AUC.

  • Finding the relationship between the Number of Iterations and AUC.
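
All three goals are measured with AUC. As a quick illustration of what that number means, the sketch below computes AUC with scikit-learn's roc_auc_score (shown only for reference; in this guide you read the value off the WSO2 ML Model Summary).

```python
from sklearn.metrics import roc_auc_score

# y_true: actual class labels (0/1); y_score: predicted scores/probabilities.
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]
# 0.75: the model ranks a positive above a negative 75% of the time.
print(roc_auc_score(y_true, y_score))
```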

Approach to tuning hyperparameters

The approach to achieving the above goals is described below.

Finding the optimal Learning Rate and Number of Iterations that improve AUC

Follow the steps below to identify a reasonable Number of Iterations to use when searching for the optimal Learning Rate.

  1. Upload your dataset (e.g., the Pima Indians Diabetes dataset) to WSO2 ML. For instructions on uploading a dataset, see Exploring Data.
  2. Create a project, and then generate a model by creating an analysis. For instructions, see Generating Models.

    Keep the Learning Rate at a fixed value (0.1), and vary the Number of Iterations in the Step 4 Parameters section of the model generation wizard in the WSO2 ML UI, as shown below.


    [Screenshot: defining the hyperparameter values]

  3. Record the AUC value you obtain for each Number of Iterations, as shown in the example below.

     

    [Table: AUC values recorded for each number of iterations]

    You can get the AUC values from the Model Summary in the WSO2 ML UI as shown below. 

     

    [Screenshot: viewing the Model Summary]

  4. Plot a graph using the results you obtained, as shown in the example below.
    [Graph: AUC against the number of iterations]
    According to the above graph, AUC increases with the Number of Iterations. Hence, 10000 is a fair number of iterations for finding the optimal learning rate; any value greater than 5000 (the point where AUC climbed above 0.5) would work equally well. Note that increasing the number of iterations excessively can lead to an overfitted model.
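
If you prefer to script this sweep rather than read each value off the UI, the sketch below shows the general pattern: train at several iteration counts with the learning rate fixed at 0.1, record AUC on a held-out set, and plot the curve. It uses scikit-learn's SGDClassifier as a stand-in for WSO2 ML's Logistic Regression with mini-batch gradient descent; the dataset path and column layout are illustrative assumptions.

```python
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Assumed local copy of the Pima Indians Diabetes dataset, headerless CSV
# with the class label in the last column (path is illustrative).
data = pd.read_csv("pima-indians-diabetes.csv", header=None)
X, y = data.iloc[:, :-1].values, data.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

iteration_counts = [100, 500, 1000, 5000, 10000]
aucs = []
for n_iter in iteration_counts:
    # Fixed learning rate (0.1), varying number of iterations,
    # mirroring the manual steps above.
    clf = SGDClassifier(loss="log_loss", learning_rate="constant", eta0=0.1,
                        max_iter=n_iter, tol=None, random_state=0)
    clf.fit(X_train, y_train)
    aucs.append(roc_auc_score(y_test, clf.decision_function(X_test)))

plt.plot(iteration_counts, aucs, marker="o")
plt.xlabel("Number of iterations")
plt.ylabel("AUC")
plt.title("AUC vs. number of iterations (learning rate = 0.1)")
plt.show()
```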

 



 
