
Learning rates of regularized regression for exponentially strongly mixing sequence. (English) Zbl 1134.62050

Summary: Regularized learning algorithms associated with the least squares loss are an important subject of study. Q. Wu et al. [Learning rates of least-square regularized regression. Found. Comput. Math. 6, No. 2, 171–192 (2006; Zbl 1100.68100)] established fast learning rates \(m^{-\theta }\) for least squares regularized regression in reproducing kernel Hilbert spaces under assumptions on the Mercer kernel and on the regression function, where \(m\) denotes the number of samples and \(\theta \) may be arbitrarily close to 1. They assumed, as do most existing works, that the samples are drawn independently from the underlying probability distribution. Independence, however, is a very restrictive assumption; without it, the analysis of learning algorithms becomes considerably more involved, and little progress has been made. The aim of this paper is to extend the above results of Wu et al. to dependent samples, where the dependence is expressed in terms of exponentially strongly mixing sequences.
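The estimator analyzed by Wu et al. is the least squares regularized regression scheme \(f_z = \arg\min_{f \in \mathcal{H}_K} \frac{1}{m}\sum_{i=1}^m (f(x_i) - y_i)^2 + \lambda \|f\|_K^2\), which by the representer theorem reduces to a finite linear system. The sketch below illustrates this estimator on dependent samples; the AR(1) input process (a standard example of an exponentially strongly mixing sequence), the Gaussian kernel, the target function, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: least squares regularized regression in an RKHS
# (kernel ridge regression) trained on dependent samples. All modeling
# choices below are assumptions made for the example.

rng = np.random.default_rng(0)
m = 200  # number of samples

# AR(1) inputs x_t = 0.5 x_{t-1} + noise: exponentially strongly mixing.
x = np.zeros(m)
for t in range(1, m):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)

f_star = np.sin  # assumed "true" regression function
y = f_star(x) + rng.normal(scale=0.1, size=m)

def k(u, v, sigma=1.0):
    """Gaussian Mercer kernel (an assumed choice)."""
    return np.exp(-(u[:, None] - v[None, :]) ** 2 / (2 * sigma ** 2))

lam = 1e-2          # regularization parameter lambda
K = k(x, x)         # m x m kernel Gram matrix

# Representer theorem: f_z(.) = sum_i alpha_i K(x_i, .), where alpha
# solves the regularized normal equations (K + m*lambda*I) alpha = y.
alpha = np.linalg.solve(K + m * lam * np.eye(m), y)

# Evaluate the estimator on a test grid inside the bulk of the data.
x_test = np.linspace(-1.0, 1.0, 50)
f_hat = k(x_test, x) @ alpha
mse = np.mean((f_hat - f_star(x_test)) ** 2)
```

As \(m\) grows (with \(\lambda\) chosen appropriately), the paper's results say the excess error decays at rate \(m^{-\theta}\) even though the samples here are not independent.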

MSC:

62J99 Linear inference, regression
68T05 Learning and adaptive systems in artificial intelligence
46N30 Applications of functional analysis in probability theory and statistics
65C60 Computational problems in statistics (MSC2010)

Citations:

Zbl 1100.68100

References:

[1] Chen, D. R.; Wu, Q.; Ying, Y. M.; Zhou, D. X., Support vector machine soft margin classifiers: error analysis, J. Mach. Learning Res., 5, 1143-1175 (2004) · Zbl 1222.68167
[2] Cucker, F.; Smale, S., On the mathematical foundations of learning, Bull. Amer. Math. Soc., 39, 1-49 (2001) · Zbl 0983.68162
[3] Cucker, F.; Smale, S., Best choice for regularization parameters in learning theory: on the bias-variance problem, Found. Comput. Math., 2, 413-428 (2002) · Zbl 1057.68085
[4] Davydov, Y. A., Mixing conditions for Markov chains, Theory Probab. Appl., XVIII, 312-328 (1973) · Zbl 0297.60031
[5] Doukhan, P., Mixing: Properties and Examples (1995), Springer: Springer New York
[6] Evgeniou, T.; Pontil, M.; Poggio, T., Regularization networks and support vector machines, Adv. Comput. Math., 13, 1-50 (2000) · Zbl 0939.68098
[7] Modha, D. S.; Masry, E., Minimum complexity regression estimation with weakly dependent observations, IEEE Trans. Inform. Theory, 42, 2133-2145 (1996) · Zbl 0868.62015
[8] Smale, S.; Zhou, D. X., Shannon sampling and function reconstruction from point values, Bull. Amer. Math. Soc., 41, 279-305 (2004) · Zbl 1107.94007
[9] Smale, S.; Zhou, D. X., 2007. Learning theory estimates via integral operators and their approximations. Constr. Approx. doi:10.1007/s00365-006-0659-y · Zbl 1127.68088
[10] Steinwart, I.; Scovel, C., 2005. Fast rates for support vector machines using Gaussian kernels. In: Proceedings of the 18th Conference on Learning Theory · Zbl 1137.68564
[11] Vidyasagar, M., Learning and Generalization with Application to Neural Networks (2003), Springer: Springer London · Zbl 1008.68102
[12] Withers, C. S., Conditions for linear processes to be strong-mixing, Probab. Theory Related Fields, 57, 477-480 (1981) · Zbl 0465.60032
[13] Wu, Q.; Ying, Y. M.; Zhou, D. X., Learning rates of least-square regularized regression, Found. Comput. Math., 6, 171-192 (2006) · Zbl 1100.68100
[14] Yu, B., Rates of convergence for empirical processes of stationary mixing sequences, Ann. Probab., 22, 94-116 (1994) · Zbl 0802.60024
[15] Zou, B.; Li, L.-Q., The performance bounds of learning machines based on exponentially strongly mixing sequences, Comput. Math. Appl., 53, 1050-1058 (2007) · Zbl 1151.68600