
Regularized least square regression with dependent samples. (English) Zbl 1191.68535
Summary: We study the learning performance of regularized least square regression with \(\alpha \)-mixing and \(\varphi \)-mixing inputs. Capacity-independent error bounds and learning rates are derived by means of an integral operator technique. Even for independent samples, our learning rates improve those in the literature. The results are sharp in the sense that, when the mixing conditions are strong enough, the rates are shown to be close to or the same as those for learning with independent samples. They also reveal interesting phenomena of learning with dependent samples: (i) dependent samples contain less information and lead to worse error bounds than independent samples; (ii) the influence of the dependence between samples on the learning process decreases as the smoothness of the target function increases.
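For orientation, here is a minimal sketch of the objects involved, in standard learning-theory notation (not necessarily the notation or normalization used in the paper). Given a sample \(\mathbf{z}=\{(x_i,y_i)\}_{i=1}^{m}\) and a reproducing kernel Hilbert space \(\mathcal{H}_K\) with kernel \(K\), the regularized least squares estimator is
\[
f_{\mathbf{z},\lambda} \;=\; \arg\min_{f\in\mathcal{H}_K}\Big\{\frac{1}{m}\sum_{i=1}^{m}\big(f(x_i)-y_i\big)^2 \;+\; \lambda\,\|f\|_K^2\Big\},
\]
and a stationary sequence \((z_i)_{i\ge 1}\) is \(\alpha\)-mixing when the (strong) mixing coefficient
\[
\alpha(k) \;=\; \sup_{j\ge 1}\;\sup_{A\in\sigma(z_1,\dots,z_j),\; B\in\sigma(z_{j+k},z_{j+k+1},\dots)}\big|P(A\cap B)-P(A)P(B)\big|
\]
tends to zero as the gap \(k\to\infty\); the \(\varphi\)-mixing coefficient is defined analogously with \(|P(B\mid A)-P(B)|\) in place of \(|P(A\cap B)-P(A)P(B)|\).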

MSC:
68T05 Learning and adaptive systems in artificial intelligence
68P05 Data structures
42B10 Fourier and Fourier-Stieltjes transforms and other transforms of Fourier type