Regularization, sparse recovery, and median-of-means tournaments. (English) Zbl 07066250
Summary: We introduce a regularized risk minimization procedure for regression function estimation. The procedure is based on median-of-means tournaments, introduced by the authors in Lugosi and Mendelson (2018), and achieves near-optimal accuracy and confidence under general conditions, including heavy-tailed predictor and response variables. It outperforms standard regularized empirical risk minimization procedures such as LASSO or SLOPE in heavy-tailed problems.
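
The summary does not reproduce the algorithm, but the median-of-means principle behind the tournaments is easy to illustrate. The following Python sketch is not the authors' regularized tournament procedure; it is a minimal illustration, under assumed names (median_of_means, mom_match) and on simulated heavy-tailed data, of the two ingredients the summary refers to: a median-of-means estimate of a mean, and a block-wise "match" that decides which of two candidate regression functions has the smaller empirical loss on a majority of blocks.

import numpy as np

rng = np.random.default_rng(0)

def median_of_means(values, n_blocks, rng=rng):
    # Median-of-means mean estimate: split the sample into blocks,
    # average within each block, and return the median of the block means.
    values = np.asarray(values, dtype=float)
    perm = rng.permutation(len(values))
    blocks = np.array_split(values[perm], n_blocks)
    return np.median([b.mean() for b in blocks])

def mom_match(f, g, X, y, n_blocks, rng=rng):
    # Simplified "match" in the spirit of a median-of-means tournament:
    # f beats g if its squared loss is smaller on a majority of blocks.
    idx_blocks = np.array_split(rng.permutation(len(y)), n_blocks)
    wins = 0
    for idx in idx_blocks:
        loss_f = np.mean((f(X[idx]) - y[idx]) ** 2)
        loss_g = np.mean((g(X[idx]) - y[idx]) ** 2)
        wins += loss_f < loss_g
    return wins > n_blocks / 2

# Heavy-tailed toy data: Student-t noise with 2.5 degrees of freedom.
n = 2000
X = rng.standard_normal(n)
y = 2.0 * X + rng.standard_t(df=2.5, size=n)

print(median_of_means(y, n_blocks=20))          # robust estimate of E[y]
print(mom_match(lambda x: 2.0 * x,              # the true regression function ...
                lambda x: 0.0 * x,              # ... should beat the zero function
                X, y, n_blocks=20))

Unlike the empirical mean or ordinary empirical risk minimization, these block-median statistics remain stable when the noise has only a few finite moments, which is the regime the paper addresses; the paper's actual procedure combines such matches with a regularization term to handle sparse recovery.
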

MSC:
60 Probability theory and stochastic processes
62 Statistics
References:
[1] Audibert, J.-Y. and Catoni, O. (2011). Robust linear least squares regression. Ann. Statist. 39, 2766–2794. · Zbl 1231.62126
[2] Bellec, P., Lecué, G. and Tsybakov, A. (2016). Slope meets Lasso: Improved oracle bounds and optimality. Preprint. Available at arXiv:1605.08651.
[3] Brownlees, C., Joly, E. and Lugosi, G. (2015). Empirical risk minimization for heavy-tailed losses. Ann. Statist. 43, 2507–2536. · Zbl 1326.62066
[4] Goldenshluger, A. and Nemirovski, A. (1997). On spatially adaptive estimation of nonparametric regression. Math. Methods Statist. 6, 135–170. · Zbl 0892.62018
[5] Hsu, D. and Sabato, S. (2013). Approximate loss minimization with heavy tails. Computing Research Repository, abs/1307.1827. · Zbl 1360.62380
[6] Lecué, G. and Lerasle, M. (2017). Learning from MOM’s principles. Preprint. Available at arXiv:1701.01961.
[7] Lecué, G. and Mendelson, S. (2017). Sparse recovery under weak moment assumptions. J. Eur. Math. Soc. (JEMS) 19, 881–904. · Zbl 1414.62135
[8] Lecué, G. and Mendelson, S. (2018). Learning subgaussian classes: Upper and minimax bounds. In Topics in Learning Theory (S. Boucheron and N. Vayatis, eds.). Société Mathématique de France. To appear.
[9] Lecué, G. and Mendelson, S. (2018). Regularization and the small-ball method I: Sparse recovery. Ann. Statist. 46, 611–641. · Zbl 1403.60085
[10] Lugosi, G. and Mendelson, S. (2018). Risk minimization by median-of-means tournaments. J. Eur. Math. Soc. (JEMS). To appear.
[11] Lugosi, G. and Mendelson, S. (2018). Sub-Gaussian estimators of the mean of a random vector. Ann. Statist. To appear. · Zbl 1417.62192
[12] Mendelson, S. (2015). Learning without concentration. J. ACM 62, Art. 21, 25 pp. · Zbl 1333.68232
[13] Mendelson, S. (2017). “Local” vs. “global” parameters – breaking the Gaussian complexity barrier. Ann. Statist. 45, 1835–1862. · Zbl 1459.62054
[14] Mendelson, S. (2017). On aggregation for heavy-tailed classes. Probab. Theory Related Fields 168, 641–674. · Zbl 1371.62032
[15] Mendelson, S. (2017). On multiplier processes under weak moment assumptions. In Geometric Aspects of Functional Analysis. Lecture Notes in Math. 2169, 301–318. Springer, Cham. · Zbl 1366.60044
[16] Mendelson, S. (2017). An optimal unrestricted learning procedure. Preprint. Available at arXiv:1707.05342v2.
[17] Minsker, S. (2015). Geometric median and robust estimation in Banach spaces. Bernoulli 21, 2308–2335. · Zbl 1348.60041
[18] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58, 267–288. · Zbl 0850.62538