Learning rates for the risk of kernel-based quantile regression estimators in additive models. (English) Zbl 1338.62077

MSC:
62G05 Nonparametric estimation
62G08 Nonparametric regression and quantile regression
68Q32 Computational learning theory
Software:
hgam
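
The software link (the R package hgam, for high-dimensional additive models) and references [16, 27, 28, 34] below point at the core technique of the paper: minimizing a regularized empirical risk built from the pinball loss ρ_τ(u) = u(τ − 1{u < 0}) of quantile regression over a reproducing kernel Hilbert space. The following is a rough illustration of that general technique only — a minimal sketch assuming a single Gaussian kernel and plain subgradient descent, with all function names and parameter values invented here rather than taken from the paper or from hgam:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    """Gram matrix of the Gaussian (RBF) kernel between the rows of X and Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def pinball_loss(residuals, tau):
    """Pinball (check) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return np.where(residuals >= 0, tau * residuals, (tau - 1.0) * residuals)

def fit_kernel_quantile(K, y, tau, lam=1e-2, lr=0.05, n_iter=5000):
    """Minimize (1/n) sum_i rho_tau(y_i - f(x_i)) + lam * ||f||_K^2 over
    f = sum_j alpha_j K(x_j, .) by plain subgradient descent (illustrative only)."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        residuals = y - K @ alpha
        # Subgradient of the pinball loss with respect to f(x_i).
        g = np.where(residuals > 0, -tau, 1.0 - tau)
        alpha -= lr * (K @ g / n + 2.0 * lam * (K @ alpha))
    return alpha

# Toy usage: estimate the conditional 0.9-quantile of a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.3 * rng.standard_normal(200)
K = gaussian_kernel(X, X, sigma=0.5)
alpha = fit_kernel_quantile(K, y, tau=0.9)
print("empirical pinball risk:", pinball_loss(y - K @ alpha, 0.9).mean())
```

In the additive models of the title, the single Gram matrix above would be replaced by a sum of Gram matrices of univariate kernels, one per input coordinate (cf. [3]).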
References:
[1] F. Bach, Consistency of the group Lasso and multiple kernel learning, J. Mach. Learn. Res. 9 (2008) 1179-1225. · Zbl 1225.68147
[2] B. E. Boser, I. Guyon and V. Vapnik, A training algorithm for optimal margin classifiers, in Proc. Fifth Annual ACM Workshop on Computational Learning Theory (ACM, Pittsburgh, PA, 1992), pp. 144-152.
[3] A. Christmann and R. Hable, Consistency of support vector machines using additive kernels for additive models, Comput. Statist. Data Anal. 56 (2012) 854-873.
[4] A. Christmann, A. Van Messem and I. Steinwart, On consistency and robustness properties of support vector machines for heavy-tailed distributions, Stat. Interface 2 (2009) 311-327. · Zbl 1245.62057
[5] C. Cortes and V. Vapnik, Support-vector networks, Mach. Learn. 20 (1995) 273-297. · Zbl 0831.68098
[6] F. Cucker and D.-X. Zhou, Learning Theory: An Approximation Theory Viewpoint (Cambridge University Press, Cambridge, 2007). · Zbl 1274.41001
[7] M. Eberts and I. Steinwart, Optimal regression rates for SVMs using Gaussian kernels, Electron. J. Statist. 7 (2013) 1-42.
[8] D. Edmunds and H. Triebel, Function Spaces, Entropy Numbers, Differential Operators (Cambridge University Press, Cambridge, 1996). · Zbl 0865.46020
[9] T. Hastie and R. Tibshirani, Generalized additive models, Statist. Sci. 1 (1986) 297-318. · Zbl 0645.62068
[10] T. J. Hastie and R. J. Tibshirani, Generalized Additive Models (CRC Press, 1990). · Zbl 0747.62061
[11] T. Hofmann, B. Schölkopf and A. J. Smola, Kernel methods in machine learning, Ann. Statist. 36 (2008) 1171-1220.
[12] T. Hu, Online regression with varying Gaussians and non-identical distributions, Anal. Appl. 9 (2011) 395-408. · Zbl 1253.68189
[13] T. Hu, J. Fan, Q. Wu and D.-X. Zhou, Regularization schemes for minimum error entropy principle, Anal. Appl., published online (2014); DOI: 10.1142/S0219530514500110.
[14] P. J. Huber, The behavior of maximum likelihood estimates under nonstandard conditions, in Proc. 5th Berkeley Symp. on Math. Statist. and Probab., Vol. 1 (1967), pp. 221-233. · Zbl 0212.21504
[15] R. Koenker, Quantile Regression (Cambridge University Press, Cambridge, 2005).
[16] R. Koenker and G. Bassett, Regression quantiles, Econometrica 46 (1978) 33-50.
[17] V. Koltchinskii and M. Yuan, Sparse recovery in large ensembles of kernel machines, in Proc. 21st Annual Conf. Learning Theory (COLT 2008), Finland (2008), pp. 229-238.
[18] Y. Lin and H. H. Zhang, Component selection and smoothing in multivariate nonparametric regression, Ann. Statist. 34 (2006) 2272-2297.
[19] L. Meier, S. van de Geer and P. Bühlmann, High-dimensional additive modeling, Ann. Statist. 37 (2009) 3779-3821.
[20] H. Q. Minh, Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory, Constr. Approx. 32 (2010) 307-338.
[21] T. Poggio and F. Girosi, Networks for approximation and learning, Proc. IEEE 78 (1990) 1481-1497.
[22] G. Raskutti, M. J. Wainwright and B. Yu, Minimax-optimal rates for sparse additive models over kernel classes via convex programming, J. Mach. Learn. Res. 13 (2012) 389-427. · Zbl 1283.62071
[23] B. Schölkopf and A. J. Smola, Learning with Kernels (MIT Press, Cambridge, MA, 2002).
[24] B. Schölkopf, A. J. Smola, R. C. Williamson and P. L. Bartlett, New support vector algorithms, Neural Comput. 12 (2000) 1207-1245.
[25] S. Smale and D.-X. Zhou, Shannon sampling II: Connections to learning theory, Appl. Comput. Harmonic Anal. 19 (2005) 285-302. · Zbl 1107.94008
[26] I. Steinwart and A. Christmann, Support Vector Machines (Springer, New York, 2008).
[27] I. Steinwart and A. Christmann, How SVMs can estimate quantiles and the median, Adv. Neural Inf. Process. Syst. 20 (2008) 305-312.
[28] I. Steinwart and A. Christmann, Estimating conditional quantiles with the help of the pinball loss, Bernoulli 17 (2011) 211-225.
[29] I. Steinwart and C. Scovel, Fast rates for support vector machines using Gaussian kernels, Ann. Statist. 35 (2007) 575-607.
[30] C. J. Stone, Additive regression and other nonparametric models, Ann. Statist. 13 (1985) 689-705.
[31] H. Sun and Q. Wu, Indefinite kernel network with dependent sampling, Anal. Appl. 11 (2013) 1350020, 15 pp.
[32] J. A. K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor and J. Vandewalle, Least Squares Support Vector Machines (World Scientific, Singapore, 2002). · Zbl 1017.93004
[33] T. Suzuki and M. Sugiyama, Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness, Ann. Statist. 41 (2013) 1381-1405.
[34] I. Takeuchi, Q. V. Le, T. D. Sears and A. J. Smola, Nonparametric quantile estimation, J. Mach. Learn. Res. 7 (2006) 1231-1264. · Zbl 1222.68316
[35] V. N. Vapnik, The Nature of Statistical Learning Theory (Springer, New York, 1995). · Zbl 0833.62008
[36] V. N. Vapnik, Statistical Learning Theory (Wiley, New York, 1998). · Zbl 0935.62007
[37] V. N. Vapnik and A. Lerner, Pattern recognition using generalized portrait method, Autom. Remote Control 24 (1963) 774-780.
[38] G. Wahba, Support vector machines, reproducing kernel Hilbert spaces and the randomized GACV, in Advances in Kernel Methods – Support Vector Learning, eds. B. Schölkopf, C. J. C. Burges and A. J. Smola (MIT Press, Cambridge, MA, 1999), pp. 69-88.
[39] H. Wendland, Scattered Data Approximation (Cambridge University Press, Cambridge, 2005). · Zbl 1075.65021
[40] Q. Wu, Y. M. Ying and D.-X. Zhou, Learning rates of least square regularized regression, Found. Comput. Math. 6 (2006) 171-192.
[41] Q. Wu, Y. M. Ying and D.-X. Zhou, Multi-kernel regularized classifiers, J. Complexity 23 (2007) 108-134.
[42] D. H. Xiang, Conditional quantiles with varying Gaussians, Adv. Comput. Math. 38 (2013) 723-735.
[43] D. H. Xiang and D.-X. Zhou, Classification with Gaussians and convex loss, J. Mach. Learn. Res. 10 (2009) 1447-1468. · Zbl 1235.68207
[44] D.-X. Zhou, Capacity of reproducing kernel spaces in learning theory, IEEE Trans. Inform. Theory 49 (2003) 1743-1752.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.