# zbMATH — the first resource for mathematics

Needles and straw in a haystack: robust confidence for possibly sparse sequences. (English) Zbl 1441.62110
Authors’ abstract: In the general signal$$+$$noise model (allowing non-normal, non-independent observations), we construct an empirical Bayes posterior, which we then use for uncertainty quantification for the unknown, possibly sparse, signal. We introduce a novel excessive bias restriction (EBR) condition, which gives rise to a new slicing of the entire space that is suitable for uncertainty quantification. Under EBR and a mild exchangeable exponential moment condition on the noise, we establish the local (oracle) optimality of the proposed confidence ball. Without EBR, we propose another confidence ball with full coverage, but its radius contains an additional $$\sigma n^{1/4}$$-term. In passing, we also obtain locally optimal results for the estimation and posterior contraction problems, and for the problem of weak recovery of the sparsity structure. Adaptive minimax results (also for the estimation and posterior contraction problems) over various sparsity classes follow from our local results.
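To make the setting concrete, the following is a minimal illustrative sketch (not the authors' procedure) of the sparse normal means special case of the signal$$+$$noise model: a simple hard-thresholding stand-in for the empirical Bayes posterior mean, together with a confidence-ball radius of the "fallback" form mentioned in the abstract, combining an oracle-type term with the additional $$\sigma n^{1/4}$$-term. All constants and names here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse normal means model X_i = theta_i + sigma * xi_i (illustration only;
# the paper allows non-normal, non-independent noise).
n, s, sigma = 1000, 10, 1.0
theta = np.zeros(n)
theta[:s] = 5.0                      # a few "needles" in the haystack
X = theta + sigma * rng.standard_normal(n)

# Universal hard thresholding (Donoho-Johnstone), used here as a simple
# stand-in for the empirical Bayes posterior mean constructed in the paper.
t = sigma * np.sqrt(2 * np.log(n))
theta_hat = np.where(np.abs(X) > t, X, 0.0)

# Crude confidence-ball radius: an oracle-type term driven by the estimated
# sparsity, plus the sigma * n**(1/4) fallback term from the abstract.
s_hat = int(np.count_nonzero(theta_hat))
radius = sigma * np.sqrt(max(s_hat, 1) * np.log(n)) + sigma * n ** 0.25

inside = bool(np.linalg.norm(theta_hat - theta) <= radius)
print(s_hat, round(radius, 2), inside)
```

The $$\sigma n^{1/4}$$-term dominates when the estimated sparsity is small; the paper's point is that, under the EBR condition, a ball without this inflation already has honest coverage.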

##### MSC:
- 62G15 Nonparametric tolerance and confidence regions
- 62G20 Asymptotic properties of nonparametric inference
- 62G35 Nonparametric robustness
- 62C20 Minimax procedures in statistical decision theory
##### Software:
EBayesThresh
##### References:
- [1] Abramovich, F., Grinshtein, V. and Pensky, M. (2007). On optimality of Bayesian testimation in the normal means problem. Ann. Statist. 35 2261-2286. · Zbl 1126.62003
- [2] Babenko, A. and Belitser, E. (2010). Oracle convergence rate of posterior under projection prior and Bayesian model selection. Math. Methods Statist. 19 219-245. · Zbl 1282.62125
- [3] Baraud, Y. (2004). Confidence balls in Gaussian regression. Ann. Statist. 32 528-551. · Zbl 1093.62051
- [4] Belitser, E. (2017). On coverage and local radial rates of credible sets. Ann. Statist. 45 1124-1151. · Zbl 1371.62044
- [5] Belitser, E. and Ghosal, S. (2018). Empirical Bayes oracle uncertainty quantification for regression. Preprint.
- [6] Belitser, E. and Nurushev, N. (2020). Supplement to “Needles and straw in a haystack: Robust confidence for possibly sparse sequences.” https://doi.org/10.3150/19-BEJ1122SUPP.
- [7] Bhattacharya, A., Dunson, D.B., Pati, D. and Pillai, N.S. (2016). Sub-optimality of some continuous shrinkage priors. Stochastic Process. Appl. 126 3828-3842. · Zbl 1419.62050
- [8] Birgé, L. and Massart, P. (2001). Gaussian model selection. J. Eur. Math. Soc. (JEMS) 3 203-268. · Zbl 1037.62001
- [9] Bull, A.D. (2012). Honest adaptive confidence bands and self-similar functions. Electron. J. Stat. 6 1490-1516. · Zbl 1295.62049
- [10] Bull, A.D. and Nickl, R. (2013). Adaptive confidence sets in $$L^2$$. Probab. Theory Related Fields 156 889-919. · Zbl 1273.62105
- [11] Cai, T.T. and Low, M.G. (2004). An adaptation theory for nonparametric confidence intervals. Ann. Statist. 32 1805-1840. · Zbl 1056.62060
- [12] Castillo, I., Schmidt-Hieber, J. and van der Vaart, A. (2015). Bayesian linear regression with sparse priors. Ann. Statist. 43 1986-2018. · Zbl 06502640
- [13] Castillo, I. and Szabó, B. (2018). Spike and slab empirical Bayes sparse credible sets. Available at arXiv:1808.07721.
- [14] Castillo, I. and van der Vaart, A. (2012). Needles and straw in a haystack: Posterior concentration for possibly sparse sequences. Ann. Statist. 40 2069-2101. · Zbl 1257.62025
- [15] Donoho, D.L. and Johnstone, I.M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika 81 425-455. · Zbl 0815.62019
- [16] Donoho, D.L. and Johnstone, I.M. (1994). Minimax risk over $$l_p$$-balls for $$l_q$$-error. Probab. Theory Related Fields 99 277-303. · Zbl 0802.62006
- [17] Donoho, D.L., Johnstone, I.M., Hoch, J.C. and Stern, A.S. (1992). Maximum entropy and the nearly black object (with discussion). J. Roy. Statist. Soc. Ser. B 54 41-81. · Zbl 0788.62103
- [18] Johnstone, I. (2017). Gaussian estimation: Sequence and wavelet models. Book draft.
- [19] Johnstone, I.M. and Silverman, B.W. (2004). Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences. Ann. Statist. 32 1594-1649. · Zbl 1047.62008
- [20] Li, K.-C. (1989). Honest confidence regions for nonparametric regression. Ann. Statist. 17 1001-1008. · Zbl 0681.62047
- [21] Martin, R. and Walker, S.G. (2014). Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector. Electron. J. Stat. 8 2188-2206. · Zbl 1302.62015
- [22] Nickl, R. and van de Geer, S. (2013). Confidence sets in sparse regression. Ann. Statist. 41 2852-2876. · Zbl 1288.62108
- [23] Picard, D. and Tribouley, K. (2000). Adaptive confidence interval for pointwise curve estimation. Ann. Statist. 28 298-335. · Zbl 1106.62331
- [24] Robins, J. and van der Vaart, A. (2006). Adaptive nonparametric confidence sets. Ann. Statist. 34 229-253. · Zbl 1091.62039
- [25] Ročková, V. (2018). Bayesian estimation of sparse signals with a continuous spike-and-slab prior. Ann. Statist. 46 401-437. · Zbl 1395.62230
- [26] Rousseau, J. and Szabó, B. (2016). Asymptotic frequentist coverage properties of Bayesian credible sets for sieve priors. Available at arXiv:1609.05067.
- [27] Rousseau, J. and Szabó, B. (2017). Asymptotic behaviour of the empirical Bayes posteriors associated to maximum marginal likelihood estimator. Ann. Statist. 45 833-865. · Zbl 1371.62048
- [28] Szabó, B., van der Vaart, A.W. and van Zanten, J.H. (2015). Frequentist coverage of adaptive nonparametric Bayesian credible sets. Ann. Statist. 43 1391-1428. · Zbl 1317.62040
- [29] van der Pas, S., Szabó, B. and van der Vaart, A. (2017). Uncertainty quantification for the horseshoe (with discussion). Bayesian Anal. 12 1221-1274. · Zbl 1384.62155
- [30] van der Pas, S.L., Kleijn, B.J.K. and van der Vaart, A.W. (2014). The horseshoe estimator: Posterior concentration around nearly black vectors. Electron. J. Stat. 8 2585-2618. · Zbl 1309.62060
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.