
Semiparametric efficiency bounds for high-dimensional models. (English) Zbl 1420.62308

Authors’ abstract: “Asymptotic lower bounds for estimation play a fundamental role in assessing the quality of statistical procedures. In this paper, we propose a framework for obtaining semiparametric efficiency bounds for sparse high-dimensional models, where the dimension of the parameter is larger than the sample size. We adopt a semiparametric point of view: we concentrate on one-dimensional functions of a high-dimensional parameter. We follow two different approaches to reach the lower bounds: asymptotic Cramér-Rao bounds and Le Cam’s type of analysis. Both of these approaches allow us to define a class of asymptotically unbiased or “regular” estimators for which a lower bound is derived. Consequently, we show that certain estimators obtained by de-sparsifying (or de-biasing) an \(\ell_{1}\)-penalized M-estimator are asymptotically unbiased and achieve the lower bound on the variance: thus in this sense they are asymptotically efficient. The paper discusses in detail the linear regression model and the Gaussian graphical model.”
For each of these two models, lower bounds on the variance of any asymptotically unbiased estimator are established. It is shown that the de-sparsified estimator is asymptotically unbiased and asymptotically efficient, that is, it attains the derived lower bound.
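A minimal sketch of this construction in the linear model may help fix ideas (the notation here is the reviewer's, assuming the by-now standard de-sparsified Lasso with Gaussian errors; it is not taken verbatim from the paper): writing the model as \(Y = X\beta^{0} + \varepsilon\) with noise variance \(\sigma^{2}\), let \(\hat\beta\) be the \(\ell_{1}\)-penalized least squares (Lasso) estimator and \(\hat\Theta\) a suitable approximate inverse of the Gram matrix \(\hat\Sigma = X^{T}X/n\). The de-sparsified estimator is
\[
\hat b \;=\; \hat\beta + \hat\Theta X^{T}(Y - X\hat\beta)/n .
\]
Under sparsity and design conditions, \(\sqrt{n}\,(\hat b_{j} - \beta^{0}_{j})\) is asymptotically \(N(0, \sigma^{2}\Theta_{jj})\) with \(\Theta = \Sigma^{-1}\), and this asymptotic variance coincides with the lower bound established in the paper.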

MSC:

62J07 Ridge regression; shrinkage estimators (Lasso)
62H12 Estimation in multivariate analysis
62F12 Asymptotic properties of parametric estimators
62J05 Linear regression; mixed models

Software:

glasso
