In machine learning, holdout validation is a data sampling method in which the dataset is split into two parts: a training dataset and a test dataset. In the simplest form the split is equal, i.e. training is performed on 50% of the dataset and testing on the remaining 50%, although unequal ratios such as 70/30 or 80/20 are also common. Holdout validation is not recommended for small datasets, because the single performance estimate it produces can be biased and highly variable. In such cases another cross-validation method, such as k-fold cross validation, should be used instead.
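As a minimal sketch of the splitting step described above, a holdout split can be implemented in pure Python (no ML libraries; `holdout_split` is an illustrative name used here, not a standard API):

```python
import random

def holdout_split(data, test_fraction=0.5, seed=0):
    """Shuffle indices and partition data into disjoint train/test sets.

    test_fraction=0.5 gives the equal 50/50 split; pass e.g. 0.2
    for an 80/20 split.
    """
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    indices = list(range(len(data)))
    rng.shuffle(indices)
    n_test = int(len(data) * test_fraction)
    test_idx = indices[:n_test]          # held-out portion
    train_idx = indices[n_test:]         # remainder used for training
    train = [data[i] for i in train_idx]
    test = [data[i] for i in test_idx]
    return train, test

# Example: split 100 samples 50/50
train, test = holdout_split(list(range(100)), test_fraction=0.5, seed=42)
```

The model is then fit on `train` only, and its performance is measured once on `test`; because that single measurement depends on which samples happened to land in the test set, it becomes unreliable when the dataset is small.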
