In machine learning, a hyperparameter is a configuration setting that is external to the model and controls the learning process. Unlike a model's internal parameters, which are learned (deduced) from the training data during the fitting process, hyperparameters are set before training begins and affect the internal parameters only indirectly, through their influence on training. Which hyperparameters apply varies with the type of machine learning model or algorithm, and some simple ML algorithms require no hyperparameters at all.
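The distinction can be made concrete with a minimal sketch in plain Python (no ML library): fitting the relationship y = 2x by gradient descent. Here the weight `w` is a model parameter, learned from the data, while `learning_rate` and `epochs` are hyperparameters, fixed before training starts.

```python
def fit(data, learning_rate, epochs):
    """Return the weight w minimising squared error on (x, y) pairs."""
    w = 0.0                      # model parameter: learned from the data
    for _ in range(epochs):      # hyperparameter: number of passes over the data
        for x, y in data:
            error = w * x - y
            w -= learning_rate * 2 * error * x   # hyperparameter: step size
    return w

data = [(1, 2), (2, 4), (3, 6)]
w = fit(data, learning_rate=0.05, epochs=100)
print(round(w, 3))   # converges to the true weight, 2.0
```

Changing the hyperparameters (for example, a much larger learning rate) changes how, and whether, the parameter `w` converges, without `w` ever being set directly.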

Hyperparameters are passed as arguments to the library functions that implement ML algorithms, for example the Keras library functions for artificial neural networks (ANNs). Common examples of model hyperparameters include the following:

  • Learning rate
  • Batch size
  • Number of training epochs
  • Number of hidden layers and neurons per layer (in neural networks)
  • Regularization strength
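The pattern of passing hyperparameters as function arguments can be sketched as follows; the `train_model` function here is hypothetical, not part of any library, but libraries such as Keras accept similar keyword arguments in their compile and fit calls.

```python
# Hypothetical training function (made up for illustration) showing how
# ML libraries expose hyperparameters as keyword arguments.

def train_model(data, learning_rate=0.01, batch_size=32, epochs=10):
    """Collect the hyperparameter settings used for a training run."""
    return {
        "learning_rate": learning_rate,  # step size for weight updates
        "batch_size": batch_size,        # samples per gradient update
        "epochs": epochs,                # full passes over the data
        "batches_per_epoch": -(-len(data) // batch_size),  # ceil division
    }

config = train_model(list(range(100)), learning_rate=0.001, epochs=20)
print(config["batches_per_epoch"])   # 4 batches of 32 cover 100 samples
```

Defaults in the signature mirror common library practice: the caller can override any hyperparameter per training run without touching the algorithm's implementation.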

There are various techniques for hyperparameter optimization. In each case, the model is trained multiple times with different hyperparameter settings, each training run comprising multiple epochs (passes over subsets of the training data), and the best-performing combination is selected. Some of the most common hyperparameter optimization techniques are the following:

  • Grid search
  • Randomized search
  • Bayesian optimization
  • Genetic algorithms
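The first two techniques can be sketched in plain Python. A toy score function stands in for a real training-and-validation run (in practice, each candidate combination would be used to train the model and would be scored on held-out data); the hyperparameter names and the search grid below are illustrative assumptions.

```python
import itertools
import random

def score(learning_rate, batch_size):
    """Toy stand-in for validation accuracy; peaks at lr=0.1, batch_size=32."""
    return -((learning_rate - 0.1) ** 2) - ((batch_size - 32) / 100) ** 2

grid = {"learning_rate": [0.001, 0.01, 0.1, 1.0],
        "batch_size": [16, 32, 64, 128]}

# Grid search: exhaustively evaluate every combination (16 runs here).
best_grid = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: score(**params),
)

# Randomized search: evaluate only a fixed budget of random combinations.
random.seed(0)
candidates = [{k: random.choice(v) for k, v in grid.items()} for _ in range(8)]
best_random = max(candidates, key=lambda params: score(**params))

print(best_grid)   # grid search finds the peak: lr=0.1, batch_size=32
```

Grid search guarantees the best combination on the grid at exponential cost in the number of hyperparameters; randomized search trades that guarantee for a fixed evaluation budget, which is why it scales better to large search spaces.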
