
Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models. (English) Zbl 1282.62112

Summary: We would like to take this opportunity to thank the discussants for their thoughtful comments and encouragement [P. Bühlmann and L. Meier, ibid. 36, No. 4, 1534–1541 (2008; Zbl 1282.62096); X.-L. Meng, ibid. 36, No. 4, 1542–1552 (2008; Zbl 1282.62104); C.-H. Zhang, ibid. 36, No. 4, 1553–1560 (2008; Zbl 1282.62110)] on our work [ibid. 36, No. 4, 1509–1533 (2008; Zbl 1142.62027)]. The discussants raised a number of issues from both theoretical and computational perspectives. In this rejoinder we try to provide some insight into these issues and to address the specific questions asked by the discussants.

MSC:

62G08 Nonparametric regression and quantile regression
65C60 Computational problems in statistics (MSC2010)
62J05 Linear regression; mixed models
62J07 Ridge regression; shrinkage estimators (Lasso)
65C05 Monte Carlo methods
62G20 Asymptotic properties of nonparametric inference

Software:

ElemStatLearn

References:

[1] Barbieri, M. and Berger, J. (2004). Optimal predictive model selection. Ann. Statist. 32 870-897. · Zbl 1092.62033 · doi:10.1214/009053604000000238
[2] Donoho, D. L. and Elad, M. (2003). Maximal sparsity representation via ℓ1 minimization. Proc. Natl. Acad. Sci. 100 2197-2202. · Zbl 1064.94011
[3] Donoho, D. L. and Huo, X. (2001). Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inform. Theory 47 2845-2862. · Zbl 1019.94503 · doi:10.1109/18.959265
[4] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression. Ann. Statist. 32 407-499. · Zbl 1091.62054 · doi:10.1214/009053604000000067
[5] Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96 1348-1360. · Zbl 1073.62547 · doi:10.1198/016214501753382273
[6] Fan, J. and Li, R. (2004). New estimation and model selection procedures for semiparametric modeling in longitudinal data analysis. J. Amer. Statist. Assoc. 99 710-723. · Zbl 1117.62329 · doi:10.1198/016214504000001060
[7] Fan, J. and Li, R. (2006). Statistical challenges with high dimensionality: Feature selection in knowledge discovery. In Proceedings of the Madrid International Congress of Mathematicians 3 (M. Sanz-Solé, J. Soria, J. L. Varona and J. Verdera, eds.) 595-622. EMS, Zürich. · Zbl 1117.62137
[8] Fan, J. and Peng, H. (2004). Nonconcave penalized likelihood with a diverging number of parameters. Ann. Statist. 32 928-961. · Zbl 1092.62031 · doi:10.1214/009053604000000256
[9] Hastie, T., Tibshirani, R. and Friedman, J. (2001). The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, New York. · Zbl 0973.62007
[10] Li, R. and Liang, H. (2008). Variable selection in semiparametric regression modeling. Ann. Statist. 36 261-286. · Zbl 1132.62027 · doi:10.1214/009053607000000604
[11] Madigan, D. and Ridgeway, G. (2004). Discussion of “Least angle regression,” by B. Efron, T. Hastie, I. Johnstone and R. Tibshirani. Ann. Statist. 32 465-469. · Zbl 1091.62054 · doi:10.1214/009053604000000067
[12] Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436-1462. · Zbl 1113.62082 · doi:10.1214/009053606000000281
[13] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267-288. · Zbl 0850.62538
[14] Zhang, T. and Yu, B. (2005). Boosting with early stopping: Convergence and consistency. Ann. Statist. 33 1538-1579. · Zbl 1078.62038 · doi:10.1214/009053605000000255
[15] Zhao, P. and Yu, B. (2006). On model selection consistency of lasso. J. Mach. Learn. Res. 7 2541-2563. · Zbl 1222.62008
[16] Zou, H. (2006). The adaptive lasso and its oracle properties. J. Amer. Statist. Assoc. 101 1418-1429. · Zbl 1171.62326 · doi:10.1198/016214506000000735