
Stability results in learning theory. (English) Zbl 1101.68621

Summary: The problem of proving generalization bounds for the performance of learning algorithms can be formulated as one of bounding the bias and variance of estimators of the expected error. We show how various stability assumptions can be employed for this purpose. We provide a necessary and sufficient stability condition for bounding the bias and variance of the Empirical Risk Minimization (ERM) algorithm, and various sufficient conditions for bounding the bias and variance of estimators for general algorithms. We discuss settings in which exponential bounds are attainable, and we prove an extension of the bounded-difference inequality for “almost always” stable algorithms.
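For readers unfamiliar with the framework, the following is a minimal sketch of the standard uniform-stability argument in the notation of Bousquet and Elisseeff; the paper itself works with weaker stability notions, so this is background rather than the paper's own result. Given a sample $S=(z_1,\dots,z_n)$ drawn i.i.d. from a distribution $D$ and an algorithm $A$ producing a hypothesis $A_S$, the expected and empirical errors under a loss $\ell$ bounded by $M$ are

    R(A_S) = \mathbb{E}_{z \sim D}\,\ell(A_S, z),
    \qquad
    \hat{R}(A_S) = \frac{1}{n}\sum_{i=1}^{n} \ell(A_S, z_i).

The algorithm is uniformly $\beta$-stable if replacing any single sample point $z_i$ by $z'$ (yielding the sample $S^{i,z'}$) changes the loss at any point by at most $\beta$:

    \sup_{S,\,z',\,z}\;\bigl|\,\ell(A_S, z) - \ell(A_{S^{i,z'}}, z)\,\bigr| \le \beta.

McDiarmid's bounded-difference inequality then yields the exponential bound

    \Pr\bigl[\,R(A_S) - \hat{R}(A_S) \ge 2\beta + \varepsilon\,\bigr]
    \le \exp\!\Bigl(-\frac{2 n \varepsilon^{2}}{(4 n \beta + M)^{2}}\Bigr);

the extension proved in the paper covers algorithms that satisfy such a bounded-difference condition only with high probability (“almost always” stability), for which McDiarmid's inequality does not apply directly.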

MSC:

68Q32 Computational learning theory
91A26 Rationality and learning in game theory
