Leave-one-out cross validation is used in the field of machine learning to determine how accurately a learning algorithm will be able to predict data that it was not trained on. When using the leave-one-out method, the learning algorithm is trained multiple times, using all but one of the training set data points. The form of the algorithm is as follows:

For `k` = 1 to `R` (where `R` is the number of training set points):
- Temporarily remove the `k`^{th} data point from the training set.
- Train the learning algorithm on the remaining `R` - 1 points.
- Test the trained algorithm on the removed data point and record the error.

Calculate the mean error over all `R` data points.
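The loop above can be sketched directly in code. This minimal example uses a 1-nearest-neighbour classifier as the learning algorithm purely for illustration; any learner with the same train/predict shape could be substituted.

```python
def nearest_neighbor_predict(train_points, train_labels, query):
    """Predict the label of `query` as that of its closest training point."""
    distances = [sum((a - b) ** 2 for a, b in zip(p, query)) for p in train_points]
    return train_labels[distances.index(min(distances))]

def leave_one_out_error(points, labels):
    """Mean error over R rounds, each round holding out one data point."""
    R = len(points)
    errors = 0
    for k in range(R):
        # Temporarily remove the k-th point; train on the remaining R - 1.
        train_points = points[:k] + points[k + 1:]
        train_labels = labels[:k] + labels[k + 1:]
        prediction = nearest_neighbor_predict(train_points, train_labels, points[k])
        errors += prediction != labels[k]  # record the error on the held-out point
    return errors / R

# Two well-separated clusters: each held-out point is still classified
# correctly by its remaining neighbours, so the mean error is 0.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (5.2, 5.1)]
labels = ["a", "a", "a", "b", "b", "b"]
print(leave_one_out_error(points, labels))  # → 0.0
```

Note that the learner is retrained from scratch on every iteration, which is exactly the source of the computational cost discussed below.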

Leave-one-out cross validation is useful because it does not waste data. Each round trains on all but one of the points, so the resulting regression or classification rules are essentially the same as those that would have been obtained from the full data set. The main drawback of the leave-one-out method is that it is expensive: training must be repeated as many times as there are data points in the training set.