
Improved bounds for square-root Lasso and square-root slope. (English) Zbl 1473.62132

Summary: Extending the results of P. C. Bellec et al. [“Slope meets Lasso: improved oracle bounds and optimality”, Preprint, arXiv:1605.08651] to the setting of sparse high-dimensional linear regression with unknown variance, we show that two estimators, the Square-Root Lasso and the Square-Root Slope, can achieve the optimal minimax prediction rate \((s/n)\log(p/s)\), up to a constant, under mild conditions on the design matrix. Here, \(n\) is the sample size, \(p\) is the dimension and \(s\) is the sparsity parameter. We also prove optimality of the estimation error in the \(l_{q}\)-norm, with \(q\in[1,2]\), for the Square-Root Lasso, and in the \(l_{2}\) and sorted \(l_{1}\) norms for the Square-Root Slope. Both estimators are adaptive to the unknown variance of the noise, and the Square-Root Slope is also adaptive to the sparsity \(s\) of the true parameter. Next, we prove that any estimator depending on \(s\) that attains the minimax rate admits a version, adaptive to \(s\), that still attains the same rate; we apply this result to the Square-Root Lasso. Moreover, for both estimators we obtain valid rates for a wide range of confidence levels, as well as improved concentration properties, as in [loc. cit.], where the case of known variance is treated. Our results are non-asymptotic.
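For orientation, we recall the standard definitions of the two estimators, in our own notation rather than that of the paper (the tuning parameters \(\lambda\) and \(\lambda_1\ge\dots\ge\lambda_p\ge 0\) are generic placeholders, not the specific values analyzed by the authors). Given a response \(y\in\mathbb{R}^n\) and a design matrix \(X\in\mathbb{R}^{n\times p}\),
\[
\hat\beta^{\mathrm{SRL}}\in\operatorname*{arg\,min}_{\beta\in\mathbb{R}^p}\;\frac{\|y-X\beta\|_2}{\sqrt{n}}+\lambda\|\beta\|_1,
\qquad
\hat\beta^{\mathrm{SRS}}\in\operatorname*{arg\,min}_{\beta\in\mathbb{R}^p}\;\frac{\|y-X\beta\|_2}{\sqrt{n}}+\sum_{j=1}^{p}\lambda_j|\beta|_{(j)},
\]
where \(|\beta|_{(1)}\ge\dots\ge|\beta|_{(p)}\) denotes the nonincreasing rearrangement of \((|\beta_1|,\dots,|\beta_p|)\), so the second penalty is the sorted \(l_1\) norm of [7]. Since the residual norm enters without a square, rescaling the noise rescales both terms of the criterion equally, which is why the penalty level need not depend on the noise variance; this is the mechanism behind the variance adaptivity discussed above.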

MSC:

62G08 Nonparametric regression and quantile regression
62C20 Minimax procedures in statistical decision theory
62G05 Nonparametric estimation
62J05 Linear regression; mixed models
62J07 Ridge regression; shrinkage estimators (Lasso)

References:

[1] Bellec, P. C., Lecué, G. and Tsybakov, A. B. (2017). Slope meets Lasso: improved oracle bounds and optimality. ArXiv preprint arXiv:1605.08651v3.
[2] Bellec, P. C., Lecué, G. and Tsybakov, A. B. (2017). Towards the study of least squares estimators with convex penalty. Séminaires et Congrès, to appear. · Zbl 06848045
[3] Bellec, P. C. and Tsybakov, A. B. (2017). Bounds on the prediction error of penalized least squares estimators with convex penalty. In Modern Problems of Stochastic Analysis and Statistics: Selected Contributions in Honor of Valentin Konakov (V. Panov, ed.). Springer. · Zbl 06848045
[4] Belloni, A., Chernozhukov, V. and Wang, L. (2011). Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika 98, 791-806. · Zbl 1228.62083
[5] Belloni, A., Chernozhukov, V. and Wang, L. (2014). Pivotal estimation via square-root lasso in nonparametric regression. Annals of Statistics 42, 757-788. · Zbl 1321.62030
[6] Bickel, P. J., Ritov, Y. and Tsybakov, A. B. (2009). Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics 37, 1705-1732. · Zbl 1173.62022
[7] Bogdan, M., van den Berg, E., Sabatti, C., Su, W. and Candès, E. J. (2015). SLOPE - adaptive variable selection via convex optimization. Annals of Applied Statistics 9, 1103-1140. · Zbl 1454.62212
[8] Giraud, C. (2014). Introduction to High-Dimensional Statistics, Vol. 138. CRC Press.
[9] Lecué, G. and Mendelson, S. (2017). Regularization and the small-ball method I: sparse recovery. Annals of Statistics, to appear.
[10] Owen, A. B. (2007). A robust hybrid of lasso and ridge regression. Contemporary Mathematics 443, 59-72. · Zbl 1134.62047
[11] Stucky, B. and van de Geer, S. (2017). Sharp oracle inequalities for square root regularization. Journal of Machine Learning Research 18, 1-29. · Zbl 1441.62188
[12] Su, W. and Candès, E. J. (2016). SLOPE is adaptive to unknown sparsity and asymptotically minimax. Annals of Statistics 44, 1038-1068. · Zbl 1338.62032
[13] Sun, T. and Zhang, C.-H. (2012). Scaled sparse linear regression. Biometrika 99, 879-898. · Zbl 1452.62515
[14] Zeng, X. and Figueiredo, M. A. T. (2014). The ordered weighted \(\ell_1\) norm: atomic formulation, projections, and algorithms. ArXiv preprint arXiv:1409.4271.