
Regularization and the small-ball method. I: Sparse recovery. (English) Zbl 1403.60085
Let \((\Omega, \mu)\) be a probability space and let \(X\) be distributed according to \(\mu\). Let \(F\) be a class of real-valued functions defined on \(\Omega\), let \(Y\) be the unknown random variable that one would like to approximate using functions in \(F\), and let \(\lambda\) be the regularization parameter.
The authors discuss the best approximation to \(Y\), namely the function \(f^*\) that minimizes over \(F\) the squared loss functional \(f \mapsto E(f(X)-Y)^2\).
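In the sparse-recovery setting of the paper, \(F\) is a class of linear functionals \(f_t(x)=\langle t,x\rangle\) and the regularized estimator minimizes the empirical squared loss plus \(\lambda\) times an \(\ell_1\) penalty (the lasso). A minimal sketch of this minimization, using a generic proximal-gradient (ISTA) solver rather than anything from the paper itself (the function name, step-size choice, and iteration count below are illustrative assumptions):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/N) * ||X @ t - y||^2 + lam * ||t||_1 via ISTA.

    X : (N, d) design matrix of samples X_1, ..., X_N (rows)
    y : (N,) observed responses Y_1, ..., Y_N
    lam : regularization parameter lambda
    """
    N, d = X.shape
    t = np.zeros(d)
    # Step size from the Lipschitz constant of the smooth part,
    # L = (2/N) * ||X||_2^2 (spectral norm squared).
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / N
    step = 1.0 / L
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ t - y) / N   # gradient of the quadratic term
        z = t - step * grad                   # gradient step
        # Proximal step for lam * ||.||_1: soft-thresholding.
        t = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return t
```

On data generated from a sparse linear model, the minimizer concentrates on the true support while shrinking the remaining coordinates toward zero, which is the recovery phenomenon the paper analyzes under small-ball (rather than concentration) assumptions.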

60K35 Interacting random processes; statistical mechanics type models; percolation theory
62G08 Nonparametric regression and quantile regression