
On the conditions used to prove oracle results for the Lasso. (English) Zbl 1327.62425

Summary: Oracle inequalities and variable selection properties for the Lasso in linear models have been established under a variety of different assumptions on the design matrix. We show in this paper how the different conditions and concepts relate to each other. The restricted eigenvalue condition [P. J. Bickel et al., Ann. Stat. 37, No. 4, 1705–1732 (2009; Zbl 1173.62022)] and the slightly weaker compatibility condition [the first author, The deterministic Lasso. Zürich: Seminar für Statistik, Eidgenössische Technische Hochschule (2007)] are sufficient for oracle results. We argue that both conditions allow for a fairly general class of design matrices. Hence, optimality of the Lasso for prediction and estimation holds in more general situations than it appears from coherence [F. Bunea et al., Lect. Notes Comput. Sci. 4539, 530–543 (2007; Zbl 1203.62053); Electron. J. Stat. 1, 169–194 (2007; Zbl 1146.62028)] or restricted isometry [E. J. Candès and T. Tao, IEEE Trans. Inf. Theory 51, No. 12, 4203–4215 (2005; Zbl 1264.94121)] assumptions.
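For concreteness, here is a sketch of the two sufficient conditions as they are commonly stated (the notation is assumed here, not taken from the summary itself: design matrix \(X\in\mathbb{R}^{n\times p}\), active set \(S\) with \(|S|=s\), complement \(S^c\), cone constant \(L>0\)). The compatibility condition requires
\[
\phi_{\mathrm{comp}}^2(L,S) := \min\left\{ \frac{s\,\|X\beta\|_2^2}{n\,\|\beta_S\|_1^2} : \|\beta_{S^c}\|_1 \le L\,\|\beta_S\|_1,\ \beta_S \ne 0 \right\} > 0,
\]
while the restricted eigenvalue condition requires
\[
\phi_{\mathrm{RE}}^2(L,S) := \min\left\{ \frac{\|X\beta\|_2^2}{n\,\|\beta_S\|_2^2} : \|\beta_{S^c}\|_1 \le L\,\|\beta_S\|_1,\ \beta_S \ne 0 \right\} > 0.
\]
Since \(\|\beta_S\|_1^2 \le s\,\|\beta_S\|_2^2\) by Cauchy–Schwarz, \(\phi_{\mathrm{comp}}(L,S) \ge \phi_{\mathrm{RE}}(L,S)\), which is why compatibility is the (slightly) weaker requirement.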

MSC:

62J07 Ridge regression; shrinkage estimators (Lasso)
62C05 General considerations in statistical decision theory
62G05 Nonparametric estimation

References:

[1] Bertsimas and Tsitsiklis (1997). Introduction to linear optimization. Athena Scientific, Belmont, MA.
[2] Bickel, Ritov and Tsybakov (2009). Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics 37 1705-1732. · Zbl 1173.62022 · doi:10.1214/08-AOS620
[3] Bunea, Tsybakov and Wegkamp (2007a). Aggregation for Gaussian regression. Annals of Statistics 35 1674-1697. · Zbl 1209.62065 · doi:10.1214/009053606000001587
[4] Bunea, Tsybakov and Wegkamp (2007c). Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics 1 169-194. · Zbl 1146.62028 · doi:10.1214/07-EJS008
[5] Bunea, Tsybakov and Wegkamp (2007b). Sparse density estimation with \(\ell_1\) penalties. In Learning Theory: 20th Annual Conference on Learning Theory, COLT 2007, San Diego, CA, USA, June 13-15, 2007, Proceedings, 530-543. Springer. · Zbl 1203.62053 · doi:10.1007/978-3-540-72927-3_38
[6] Cai, Wang and Xu (2009a). Shifting inequality and recovery of sparse signals. IEEE Transactions on Signal Processing. · Zbl 1392.94117
[7] Cai, Wang and Xu (2009b). Stable recovery of sparse signals and an oracle inequality. · Zbl 1366.94085
[8] Cai, Xu and Zhang (2009). On recovery of sparse signals via \(\ell_1\) minimization. IEEE Transactions on Information Theory 55 3388-3397. · Zbl 1367.94081 · doi:10.1109/TIT.2009.2021377
[9] Candès and Plan (2009). Near-ideal model selection by \(\ell_1\) minimization. Annals of Statistics 37 2145-2177. · Zbl 1173.62053 · doi:10.1214/08-AOS653
[10] Candès and Tao (2005). Decoding by linear programming. IEEE Transactions on Information Theory 51 4203-4215. · Zbl 1264.94121 · doi:10.1109/TIT.2005.858979
[11] Candès and Tao (2007). The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). Annals of Statistics 35 2313-2351. · Zbl 1139.62019 · doi:10.1214/009053606000001523
[12] Koltchinskii (2009a). Sparsity in penalized empirical risk minimization. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 45 7-57. · Zbl 1168.62044 · doi:10.1214/07-AIHP146
[13] Koltchinskii (2009b). The Dantzig selector and sparsity oracle inequalities. Bernoulli 15 799-828. · Zbl 1452.62486 · doi:10.3150/09-BEJ187
[14] Lounici (2008). Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. Electronic Journal of Statistics 2 90-102. · Zbl 1306.62155 · doi:10.1214/08-EJS177
[15] Meinshausen and Bühlmann (2006). High-dimensional graphs and variable selection with the Lasso. Annals of Statistics 34 1436-1462. · Zbl 1113.62082 · doi:10.1214/009053606000000281
[16] Meinshausen and Yu (2009). Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics 37 246-270. · Zbl 1155.62050 · doi:10.1214/07-AOS582
[17] Parter (1961). Extreme eigenvalues of Toeplitz forms and applications to elliptic difference equations. Transactions of the American Mathematical Society 99 153-192. · Zbl 0099.32403 · doi:10.2307/1993449
[18] van de Geer (2007). The deterministic Lasso. In JSM Proceedings (see also http://stat.ethz.ch/research/research_reports/2007/140). American Statistical Association.
[19] van de Geer (2008). High-dimensional generalized linear models and the Lasso. Annals of Statistics 36 614-645. · Zbl 1138.62323 · doi:10.1214/009053607000000929
[20] Wainwright (2009). Sharp thresholds for high-dimensional and noisy sparsity recovery using \(\ell_1\)-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory 55 2183-2202. · Zbl 1367.62220 · doi:10.1109/TIT.2009.2016018
[21] Zhang and Huang (2008). The sparsity and bias of the Lasso selection in high-dimensional linear regression. Annals of Statistics 36 1567-1594. · Zbl 1142.62044 · doi:10.1214/07-AOS520
[22] Zhang (2009). Some sharp performance bounds for least squares regression with L1 regularization. Annals of Statistics 37 2109-2144. · Zbl 1173.62029 · doi:10.1214/08-AOS659
[23] Zhao and Yu (2006). On model selection consistency of Lasso. Journal of Machine Learning Research 7 2541-2563. · Zbl 1222.62008
[24] Zou (2006). The adaptive Lasso and its oracle properties. Journal of the American Statistical Association 101 1418-1429. · Zbl 1171.62326 · doi:10.1198/016214506000000735