
zbMATH — the first resource for mathematics

Optimal learning rates for least squares regularized regression with unbounded sampling. (English) Zbl 1217.65024
Authors’ abstract: A standard assumption in the theoretical study of learning algorithms for regression is uniform boundedness of output sample values. This excludes the common case with Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces without the assumption of uniform boundedness for sampling. By imposing some incremental conditions on the moments of the output variable, we derive learning rates in terms of regularity of the regression function and capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.
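The scheme studied in the abstract, least squares regularization in a reproducing kernel Hilbert space, coincides with kernel ridge regression: given samples \((x_i, y_i)_{i=1}^m\) and kernel \(K\), it minimizes \(\frac{1}{m}\sum_i (f(x_i)-y_i)^2 + \lambda \|f\|_K^2\). The following sketch illustrates the estimator on data with Gaussian noise, the unbounded-output setting the paper addresses; the Gaussian kernel, bandwidth, sample size, and regularization parameter here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical illustration of least squares regularized regression in an RKHS
# (kernel ridge regression). Kernel, bandwidth sigma, and lambda are example
# choices for this sketch, not taken from the paper under review.

def gaussian_kernel(X, Y, sigma=0.3):
    # Gram matrix K[i, j] = exp(-|x_i - y_j|^2 / (2 sigma^2)) for 1-D inputs.
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(x, y, lam, sigma=0.3):
    # By the representer theorem the minimizer is f(t) = sum_i c_i K(t, x_i),
    # where c solves the linear system (K + lam * m * I) c = y.
    m = len(x)
    K = gaussian_kernel(x, x, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(x_train, coeffs, x_test, sigma=0.3):
    return gaussian_kernel(x_test, x_train, sigma) @ coeffs

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
# Gaussian noise: the outputs y are unbounded, so the uniform-boundedness
# assumption criticized in the abstract fails here.
y = np.sin(np.pi * x) + rng.normal(scale=0.3, size=200)

c = krr_fit(x, y, lam=1e-4)
x_test = np.linspace(-1.0, 1.0, 5)
print(krr_predict(x, c, x_test))
```

The learning rates in the paper quantify how the error of such an estimator decays with the sample size \(m\), as a function of the regularity of the regression function \(x \mapsto \mathbb{E}[y \mid x]\) and the capacity of the RKHS.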

MSC:
65C60 Computational problems in statistics (MSC2010)
62J05 Linear regression; mixed models
46E22 Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces)
68T05 Learning and adaptive systems in artificial intelligence
References:
[1] Bennett, G., Probability inequalities for the sum of independent random variables, J. Amer. Statist. Assoc., 57, 33-45, (1962) · Zbl 0104.11905
[2] Caponnetto, A.; De Vito, E., Optimal rates for regularized least-squares algorithms, Found. Comput. Math., 7, 331-368, (2007) · Zbl 1129.68058
[3] Chen, D.R.; Wu, Q.; Ying, Y.; Zhou, D.X., Support vector machine soft margin classifiers: error analysis, J. Mach. Learn. Res., 5, 1143-1175, (2004) · Zbl 1222.68167
[4] De Vito, E.; Caponnetto, A.; Rosasco, L., Model selection for regularized least-squares algorithm in learning theory, Found. Comput. Math., 5, 59-85, (2005) · Zbl 1083.68106
[5] Mendelson, S.; Neeman, J., Regularization in kernel learning, Ann. Statist., 38, 526-565, (2010) · Zbl 1191.68356
[6] Smale, S.; Zhou, D.X., Learning theory estimates via integral operators and their approximations, Constr. Approx., 26, 153-172, (2007) · Zbl 1127.68088
[7] Smale, S.; Zhou, D.X., Estimating the approximation error in learning theory, Anal. Appl., 1, 17-41, (2003) · Zbl 1079.68089
[8] Smale, S.; Zhou, D.X., Online learning with Markov sampling, Anal. Appl., 7, 87-113, (2009) · Zbl 1170.68022
[9] Steinwart, I.; Christmann, A., Support Vector Machines, (2008), Springer-Verlag New York · Zbl 1203.68171
[10] Steinwart, I.; Hush, D.; Scovel, C., A new concentration result for regularized risk minimizers, in: E. Giné, V. Koltchinskii, W. Li, J. Zinn (Eds.), High Dimensional Probability IV, Institute of Mathematical Statistics, Beachwood, 2006, pp. 260-275. · Zbl 1127.68090
[11] Steinwart, I.; Hush, D.; Scovel, C., Optimal rates for regularized least-squares regression, in: S. Dasgupta, A. Klivans (Eds.), Proceedings of the 22nd Annual Conference on Learning Theory, 2009, pp. 79-93.
[12] Steinwart, I.; Scovel, C., Fast rates for support vector machines, Lecture Notes in Comput. Sci., 3559, 279-294, (2005) · Zbl 1137.68564
[13] Wu, Q.; Ying, Y.; Zhou, D.X., Learning rates of least-square regularized regression, Found. Comput. Math., 6, 171-192, (2006) · Zbl 1100.68100
[14] Wu, Q.; Ying, Y.; Zhou, D.X., Multi-kernel regularized classifiers, J. Complexity, 23, 108-134, (2007) · Zbl 1171.65043
[15] Wu, Q.; Zhou, D.X., Analysis of support vector machine classification, J. Comput. Anal. Appl., 8, 99-119, (2006) · Zbl 1098.68680
[16] Ye, G.B.; Zhou, D.X., SVM learning and \(L^p\) approximation by Gaussians on Riemannian manifolds, Anal. Appl., 7, 309-339, (2009) · Zbl 1175.68346
[17] Zhang, T., Leave-one-out bounds for kernel methods, Neural Comput., 15, 1397-1437, (2003) · Zbl 1085.68144
[18] Zhou, D.X., Capacity of reproducing kernel spaces in learning theory, IEEE Trans. Inform. Theory, 49, 1743-1752, (2003) · Zbl 1290.62033
[19] Zhou, D.X., Derivative reproducing properties for kernel methods in learning theory, J. Comput. Appl. Math., 220, 456-463, (2008) · Zbl 1152.68049
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.