Summary: We consider the problem of selecting a model having the best predictive ability among a class of linear models. The popular leave-one-out cross-validation method, which is asymptotically equivalent to many other model selection methods such as the Akaike information criterion (AIC), the Cp, and the bootstrap, is asymptotically inconsistent in the sense that the probability of selecting the model with the best predictive ability does not converge to 1 as the total number of observations n → ∞.
We show that the inconsistency of the leave-one-out cross-validation can be rectified by using a leave-n_v-out cross-validation with n_v, the number of observations reserved for validation, satisfying n_v/n → 1 as n → ∞. This is a somewhat shocking discovery, because n_v/n → 1 is totally opposite to the popular leave-one-out recipe in cross-validation. Motivations, justifications, and discussions of some practical aspects of the use of the leave-n_v-out cross-validation method are provided, and results from a simulation study are presented.
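To make the condition n_v/n → 1 concrete, the following is a minimal sketch of a Monte Carlo leave-n_v-out cross-validation for linear model selection, not the paper's exact procedure. The construction-set size n_c = n − n_v is taken here as n^{3/4}, an illustrative choice satisfying n_c/n → 0; the function name, the number of random splits, and the representation of candidate models as column-index tuples are all assumptions for the sketch.

```python
import numpy as np

def mccv_select(X, y, candidate_models, n_splits=200, seed=None):
    """Monte Carlo leave-n_v-out CV: repeatedly fit each candidate linear
    model on a random construction set of size n_c and score it by the
    average squared prediction error on the n_v held-out observations."""
    rng = np.random.default_rng(seed)
    n = len(y)
    n_c = int(round(n ** 0.75))  # illustrative choice; any n_c with n_c/n -> 0 gives n_v/n -> 1
    scores = {m: 0.0 for m in candidate_models}
    for _ in range(n_splits):
        train = rng.choice(n, size=n_c, replace=False)
        valid = np.setdiff1d(np.arange(n), train)
        for m in candidate_models:
            Xt, Xv = X[np.ix_(train, m)], X[np.ix_(valid, m)]
            beta, *_ = np.linalg.lstsq(Xt, y[train], rcond=None)
            resid = y[valid] - Xv @ beta
            scores[m] += np.mean(resid ** 2) / n_splits
    # select the candidate with the smallest cross-validated prediction error
    return min(scores, key=scores.get)
```

As a usage example, candidate_models could be [(0,), (0, 1), (0, 1, 2)] for nested models that always include an intercept column; the key contrast with leave-one-out is that here most of the data are reserved for validation rather than for fitting.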