Optimal learning rates for least squares regularized regression with unbounded sampling. (English) Zbl 1217.65024
Authors’ abstract: A standard assumption in the theoretical study of learning algorithms for regression is uniform boundedness of output sample values. This excludes the common case with Gaussian noise. In this paper we investigate the learning algorithm for regression generated by the least squares regularization scheme in reproducing kernel Hilbert spaces without the assumption of uniform boundedness for sampling. By imposing some incremental conditions on the moments of the output variable, we derive learning rates in terms of regularity of the regression function and capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.
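For orientation, the least squares regularization scheme referred to in the abstract can be written in a standard form (a sketch only; the notation f_{z,\lambda}, H_K, m and the regularization parameter \lambda are chosen here for illustration and need not match the paper's): given a sample z = \{(x_i, y_i)\}_{i=1}^m and a reproducing kernel Hilbert space H_K with kernel K,
\[
f_{z,\lambda} = \operatorname*{arg\,min}_{f \in H_K} \left\{ \frac{1}{m} \sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2 + \lambda \|f\|_K^2 \right\}.
\]
By the representer theorem the minimizer has the finite form f_{z,\lambda}(x) = \sum_{i=1}^{m} c_i K(x, x_i), where the coefficient vector c solves the linear system (\mathbb{K} + m\lambda I)c = y with Gram matrix \mathbb{K} = (K(x_i, x_j))_{i,j=1}^{m}. The learning rates discussed above quantify how fast f_{z,\lambda} approaches the regression function as m grows, here under moment conditions on the output variable rather than a uniform bound.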
MSC:
65C60 Computational problems in statistics
62J05 Linear regression
46E22 Hilbert spaces with reproducing kernels
68T05 Learning and adaptive systems