Castro, Rui M.
Adaptive sensing performance lower bounds for sparse signal detection and support estimation. (English) Zbl 1357.94030
Bernoulli 20, No. 4, 2217-2246 (2014).

Summary: This paper gives a precise characterization of the fundamental limits of adaptive sensing for diverse estimation and testing problems concerning sparse signals. We consider in particular the setting introduced in [J. Haupt et al., "Distilled sensing: adaptive sampling for sparse detection and estimation", IEEE Trans. Inform. Theory 57, No. 9, 6222-6235 (2011; doi:10.1109/TIT.2011.2162269)] and show necessary conditions on the minimum signal magnitude for both detection and estimation: if \(\mathbf{x}\in\mathbb{R}^{n}\) is a sparse vector with \(s\) non-zero components, then it can be reliably detected in noise provided the magnitude of the non-zero components exceeds \(\sqrt{2/s}\). Furthermore, the signal support can be exactly identified provided the minimum magnitude exceeds \(\sqrt{2\log s}\). Notably, there is no dependence on \(n\), the extrinsic signal dimension. These results show that the adaptive sensing methodologies proposed previously in the literature are essentially optimal and cannot be substantially improved. In addition, these results provide further insight into the limits of adaptive compressive sensing. (A brief numerical illustration of these threshold scalings follows the reference list below.)

Cited in 2 Documents

MSC:
94A12 Signal theory (characterization, reconstruction, filtering, etc.)
62L05 Sequential statistical design
62M07 Non-Markovian processes: hypothesis testing
62M09 Non-Markovian processes: estimation
94A13 Detection theory in information and communication theory

Keywords: adaptive sensing; minimax lower bounds; sequential experimental design; sparsity-based models

Full Text: DOI arXiv Euclid

References:
[1] Addario-Berry, L., Broutin, N., Devroye, L. and Lugosi, G. (2010). On combinatorial testing problems. Ann. Statist. 38 3063-3092. · Zbl 1200.62059 · doi:10.1214/10-AOS817
[2] Arias-Castro, E., Candès, E.J. and Davenport, M.A. (2013). On the fundamental limits of adaptive sensing. IEEE Trans. Inform. Theory 59 472-481. · Zbl 1364.94106 · doi:10.1109/TIT.2012.2215837
[3] Arias-Castro, E., Candès, E.J., Helgason, H. and Zeitouni, O. (2008). Searching for a trail of evidence in a maze. Ann. Statist. 36 1726-1757. · Zbl 1143.62006 · doi:10.1214/07-AOS526
[4] Balcan, M.-F., Beygelzimer, A. and Langford, J. (2006). Agnostic active learning. In 23rd International Conference on Machine Learning 65-72.
[5] Bessler, S.A. (1960). Theory and applications of the sequential design of experiments, \(k\)-actions and infinitely many experiments: Part I - Theory. Technical Report 55, Stanford Univ., Applied Mathematics and Statistics Laboratories.
[6] Blanchard, G. and Geman, D. (2005). Hierarchical testing designs for pattern recognition. Ann. Statist. 33 1155-1202. · Zbl 1072.62052 · doi:10.1214/009053605000000174
[7] Butucea, C. and Ingster, Y. (2013). Detection of a sparse submatrix of a high-dimensional noisy matrix. Bernoulli 19 2652-2688. · Zbl 1457.62072 · doi:10.3150/12-BEJ470
[8] Cai, T.T., Jin, J. and Low, M.G. (2007). Estimation and confidence sets for sparse normal mixtures. Ann. Statist. 35 2421-2449. · Zbl 1360.62113 · doi:10.1214/009053607000000334
[9] Castro, R., Willett, R. and Nowak, R. (2005). Faster rates in regression via active learning. In Advances in Neural Information Processing Systems 18 179-186.
[10] Castro, R.M. and Nowak, R.D. (2008). Minimax bounds for active learning. IEEE Trans. Inform. Theory 54 2339-2353. · Zbl 1330.68246 · doi:10.1109/TIT.2008.920189
[11] Chernoff, H. (1959). Sequential design of experiments. Ann. Math. Statist. 30 755-770. · Zbl 0092.36103 · doi:10.1214/aoms/1177706205
[12] Cohn, D., Ghahramani, Z. and Jordan, M. (1996). Active learning with statistical models. J. Artificial Intelligence Res. 4 129-145. · Zbl 0900.68366
[13] Dasgupta, S. (2004). Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems 17 337-344.
[14] Dasgupta, S. (2005). Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems 18 235-242.
[15] Dasgupta, S., Kalai, A. and Monteleoni, C. (2005). Analysis of perceptron-based active learning. In Eighteenth Annual Conference on Learning Theory (COLT) 249-263. · Zbl 1235.68143 · doi:10.1007/11503415_17
[16] Donoho, D. and Jin, J. (2004). Higher criticism for detecting sparse heterogeneous mixtures. Ann. Statist. 32 962-994. · Zbl 1092.62051 · doi:10.1214/009053604000000265
[17] Donoho, D.L. (2006). Compressed sensing. IEEE Trans. Inform. Theory 52 1289-1306. · Zbl 1288.94016 · doi:10.1109/TIT.2006.871582
[18] El-Gamal, M.A. (1991). The role of priors in active Bayesian learning in the sequential statistical decision framework. In Maximum Entropy and Bayesian Methods (Laramie, WY, 1990). Fund. Theories Phys. 43 (W.T. Grandy and L.H. Schick, eds.) 33-38. Dordrecht: Kluwer Academic. · doi:10.1007/978-94-011-3460-6_3
[19] Fedorov, V.V. (1972). Theory of Optimal Experiments. New York: Academic Press. · Zbl 0258.62044
[20] Freund, Y., Seung, H.S., Shamir, E. and Tishby, N. (1997). Selective sampling using the query by committee algorithm. Machine Learning 28 133-168. · Zbl 0881.68093 · doi:10.1023/A:1007330508534
[21] Hall, P. and Molchanov, I. (2003). Sequential methods for design-adaptive estimation of discontinuities in regression curves and surfaces. Ann. Statist. 31 921-941. · Zbl 1028.62069 · doi:10.1214/aos/1056562467
[22] Hanneke, S. (2011). Rates of convergence in active learning. Ann. Statist. 39 333-361. · Zbl 1274.62510 · doi:10.1214/10-AOS843
[23] Haupt, J., Baraniuk, R., Castro, R. and Nowak, R. (2012). Sequentially designed compressed sensing. In IEEE Statistical Signal Processing Workshop (IEEE SSP) Proceedings 401-404. Available at .
[24] Haupt, J., Castro, R.M. and Nowak, R. (2011). Distilled sensing: Adaptive sampling for sparse detection and estimation. IEEE Trans. Inform. Theory 57 6222-6235. · Zbl 1365.94177 · doi:10.1109/TIT.2011.2162269
[25] Ingster, Y.I. (1997). Some problems of hypothesis testing leading to infinitely divisible distributions. Math. Methods Statist. 6 47-69. · Zbl 0878.62005
[26] Ingster, Y.I. and Suslina, I.A. (2003). Nonparametric Goodness-of-fit Testing Under Gaussian Models. Lecture Notes in Statistics 169. New York: Springer. · Zbl 1013.62049
[27] Kim, J.-C. and Korostelev, A. (2000). Rates of convergence for the sup-norm risk in image models under sequential designs. Statist. Probab. Lett. 46 391-399. · Zbl 0976.62079 · doi:10.1016/S0167-7152(99)00128-5
[28] Koltchinskii, V. (2010). Rademacher complexities and bounding the excess risk in active learning. J. Mach. Learn. Res. 11 2457-2485. · Zbl 1242.62088
[29] Lai, T.L. and Robbins, H. (1985). Asymptotically efficient adaptive allocation rules. Adv. in Appl. Math. 6 4-22. · Zbl 0568.62074 · doi:10.1016/0196-8858(85)90002-8
[30] Malloy, M. and Nowak, R. (2011). On the limits of sequential testing in high dimensions. In Asilomar Conference on Signals, Systems and Computers 1245-1249. Available at .
[31] Malloy, M. and Nowak, R. (2011). Sequential analysis in high-dimensional multiple testing and sparse recovery. In The IEEE International Symposium on Information Theory 2661-2665. Available at .
[32] Meinshausen, N. and Rice, J. (2006). Estimating the proportion of false null hypotheses among a large number of independently tested hypotheses. Ann. Statist. 34 373-393. · Zbl 1091.62059 · doi:10.1214/009053605000000741
[33] Novak, E. (1996). On the power of adaption. J. Complexity 12 199-237. · Zbl 0870.65042 · doi:10.1006/jcom.1996.0015
[34] Tsybakov, A.B. (2009). Introduction to Nonparametric Estimation. New York: Springer. · Zbl 1176.62032
[35] Wald, A. (1947). Sequential Analysis. New York: Wiley. · Zbl 0029.15805
[36] Wasserman, L. (2006). All of Nonparametric Statistics. Springer Texts in Statistics. New York: Springer. · Zbl 1099.62029

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases these data have been complemented/enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or perfect matching.
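The threshold scalings quoted in the summary can be checked numerically. The sketch below is illustrative only and is not taken from the paper: it assumes the standard i.i.d. \(N(0,1)\) noise model, picks arbitrary values of \(n\) and \(s\), and uses the classical fact that the maximum of \(n\) independent standard Gaussians concentrates near \(\sqrt{2\log n}\), which is what forces the \(\log n\) dependence of non-adaptive support estimation that the adaptive bounds here avoid.

```python
# Illustrative sketch (not from the paper): contrast the classical
# non-adaptive support-recovery scale sqrt(2 log n) with the adaptive
# scale sqrt(2 log s) established in this work, and verify empirically
# that the maximum of n i.i.d. N(0,1) noise variables sits near
# sqrt(2 log n). Values of n and s below are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

for n, s in [(10**4, 10), (10**6, 10), (10**6, 1000)]:
    passive = np.sqrt(2 * np.log(n))    # non-adaptive scale: grows with n
    adaptive = np.sqrt(2 * np.log(s))   # adaptive scale: depends only on s
    noise_max = rng.standard_normal(n).max()
    print(f"n={n:>7}  s={s:>4}  sqrt(2 log n)={passive:5.2f}  "
          f"sqrt(2 log s)={adaptive:5.2f}  max noise={noise_max:5.2f}")
```

For \(n=10^6\) and \(s=10\), the non-adaptive scale is about 5.26 while the adaptive scale is about 2.15; increasing \(n\) further moves only the former, matching the summary's point that the adaptive thresholds have no dependence on the extrinsic dimension \(n\).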