Stability and generalization. (English) Zbl 1007.68083

Summary: We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in classification, when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms, such as regularization-based algorithms. In particular, we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to support vector machines (SVMs) for regression and classification.
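As an illustrative sketch (not taken from the paper itself), the idea behind uniform stability — how much the loss of the learned hypothesis can change when a single training point is removed — can be estimated empirically for a simple regularized one-dimensional least-squares learner; the function names and the data below are hypothetical:

```python
# Illustrative sketch (not from the paper): empirically estimate the
# uniform stability of a regularized 1-D least-squares learner, i.e. the
# largest observed change in the loss when one training point is left out.

def fit_ridge_1d(xs, ys, lam):
    """Closed-form minimizer of sum_i (w*x_i - y_i)^2 + lam * w^2."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + lam
    return num / den

def sq_loss(w, x, y):
    """Squared loss of the linear hypothesis w at example (x, y)."""
    return (w * x - y) ** 2

def empirical_stability(xs, ys, lam):
    """Max, over left-out index i and sample points, of the loss change."""
    w_full = fit_ridge_1d(xs, ys, lam)
    beta = 0.0
    for i in range(len(xs)):
        # Retrain on the sample with point i removed (leave-one-out).
        w_loo = fit_ridge_1d(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], lam)
        for x, y in zip(xs, ys):
            beta = max(beta, abs(sq_loss(w_full, x, y) - sq_loss(w_loo, x, y)))
    return beta

# On exactly linear data with no regularization, every leave-one-out
# hypothesis equals the full-sample one, so the estimate is zero.
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(empirical_stability(xs, ys, lam=0.0))        # -> 0.0
print(empirical_stability(xs, ys, lam=1.0) > 0.0)  # -> True
```

Such an empirical quantity only illustrates the definition; the paper's results bound the (worst-case) stability analytically for regularization algorithms and turn it into generalization bounds.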


68Q32 Computational learning theory
68T05 Learning and adaptive systems in artificial intelligence
68W05 Nonnumerical algorithms
Full Text: DOI