CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. (English) Zbl 1163.94003
Summary: Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For compressible signals, the running time is just $O(N \log^2 N)$, where $N$ is the length of the signal.
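As a rough illustration of the iteration described above (not the authors' reference implementation), the following NumPy sketch follows the usual CoSaMP template: form a signal proxy with the adjoint of the sampling matrix, merge the $2s$ largest proxy entries with the current support, solve a least-squares problem on that support, prune to the $s$ largest entries, and update the sample residual. The names `Phi`, `u`, and `s` and the dense `lstsq` solve are assumptions made for illustration; the $O(N \log^2 N)$ running time quoted in the summary relies on structured sampling matrices with fast matrix-vector multiplies and an iterative least-squares solver, not on a dense solve.

```python
import numpy as np

def cosamp(Phi, u, s, max_iter=50, tol=1e-10):
    """Minimal CoSaMP sketch (illustrative parameter names): recover an
    s-sparse approximation of x from noisy samples u ~= Phi @ x + e."""
    N = Phi.shape[1]
    a = np.zeros(N)                 # current approximation a^k
    v = u.astype(float)             # residual of the samples
    for _ in range(max_iter):
        y = Phi.T @ v                                          # signal proxy Phi* v
        Omega = np.argsort(np.abs(y))[-2 * s:]                 # 2s largest proxy entries
        T = np.union1d(Omega, np.flatnonzero(a))               # merge with current support
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(Phi[:, T], u, rcond=None)[0]    # least squares on merged support
        a = np.zeros(N)
        keep = np.argsort(np.abs(b))[-s:]                      # prune to the s largest entries
        a[keep] = b[keep]
        v = u - Phi @ a                                        # update the sample residual
        if np.linalg.norm(v) <= tol * np.linalg.norm(u):       # halting criterion
            break
    return a

# Tiny usage example with a random Gaussian sampling matrix (illustration only).
rng = np.random.default_rng(0)
m, N, s = 60, 256, 5
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
x_hat = cosamp(Phi, Phi @ x, s)
print(np.linalg.norm(x - x_hat))
```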

MSC:
94A12 Signal theory (characterization, reconstruction, filtering, etc.)
Software:
CoSaMP
Full Text: DOI arXiv
References:
[1] Björck, Å.: Numerical methods for least squares problems, (1996) · Zbl 0847.65023
[2] Candès, E. J.: The restricted isometry property and its implications for compressed sensing, C. R. Math. acad. Sci. Paris, ser. I 346, 589-592 (2008) · Zbl 1153.94002 · doi:10.1016/j.crma.2008.03.014
[3] Candès, E.; Romberg, J.; Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete Fourier information, IEEE trans. Inform. theory 52, No. 2, 489-509 (2006) · Zbl 1231.94017 · doi:10.1109/TIT.2005.862083
[4] Candès, E.; Romberg, J.; Tao, T.: Stable signal recovery from incomplete and inaccurate measurements, Comm. pure appl. Math. 59, No. 8, 1207-1223 (2006) · Zbl 1098.94009 · doi:10.1002/cpa.20124
[5] Candès, E. J.; Tao, T.: Decoding by linear programming, IEEE trans. Inform. theory 51, No. 12, 4203-4215 (2005) · Zbl 1264.94121
[6] Candès, E. J.; Tao, T.: Near optimal signal recovery from random projections: universal encoding strategies?, IEEE trans. Inform. theory 52, No. 12, 5406-5425 (2006) · Zbl 1309.94033
[7] A. Cohen, W. Dahmen, R. DeVore, Compressed sensing and best k-term approximation, IGPM Report, RWTH-Aachen, July 2006 · Zbl 1206.94008
[8] Cormen, T.; Leiserson, C.; Rivest, R.; Stein, C.: Introduction to algorithms, (2001)
[9] G. Cormode, S. Muthukrishnan, Combinatorial algorithms for compressed sensing, Technical report, DIMACS, 2005 · Zbl 1222.94016
[10] W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing: Closing the gap between performance and complexity, available at: http://www.dsp.ece.rice.edu/cs/SubspacePursuit.pdf (preprint)
[11] Daubechies, I.; Defrise, M.; Mol, C. D.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. pure appl. Math. 57, 1413-1457 (2004) · Zbl 1077.65055 · doi:10.1002/cpa.20042
[12] Donoho, D. L.: Compressed sensing, IEEE trans. Inform. theory 52, No. 4, 1289-1306 (2006) · Zbl 1288.94016
[13] D.L. Donoho, J. Tanner, Counting faces of randomly projected polytopes when the projection radically lowers dimension, Department of Statistics Technical Report 2006-11, Stanford University, 2006 · Zbl 1206.52010
[14] D.L. Donoho, Y. Tsaig, I. Drori, J.-L. Starck, Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit (StOMP), submitted for publication
[15] Figueiredo, M. A. T.; Nowak, R. D.; Wright, S. J.: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems, IEEE J. Select. top. Signal process.: special issue on convex optimization methods for signal processing 1, No. 4, 586-598 (2007)
[16] Garnaev, A.; Gluskin, E.: On widths of the Euclidean ball, Sov. math. Dokl. 30, 200-204 (1984) · Zbl 0588.41022
[17] A.C. Gilbert, S. Guha, P. Indyk, Y. Kotidis, S. Muthukrishnan, M.J. Strauss, Fast, small-space algorithms for approximate histogram maintenance, in: ACM Symposium on Theoretical Computer Science, 2002 · Zbl 1192.68962
[18] A.C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M.J. Strauss, Near-optimal sparse Fourier representations via sampling, in: Proceedings of the 2002 ACM Symposium on Theory of Computing STOC, 2002 · Zbl 1192.94078
[19] A.C. Gilbert, S. Muthukrishnan, M.J. Strauss, Approximation of functions over redundant dictionaries using coherence, in: Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms, 2003 · Zbl 1094.41500
[20] A.C. Gilbert, S. Muthukrishnan, M.J. Strauss, Improved time bounds for near-optimal sparse Fourier representation via sampling, in: Proceedings of SPIE Wavelets XI, San Diego, CA, 2005
[21] A. Gilbert, M. Strauss, J. Tropp, R. Vershynin, Algorithmic linear dimension reduction in the $\ell_1$ norm for sparse vectors, submitted for publication
[22] A. Gilbert, M. Strauss, J. Tropp, R. Vershynin, One sketch for all: Fast algorithms for compressed sensing, in: Proceedings of the 39th ACM Symp. Theory of Computing, San Diego, 2007 · Zbl 1232.94008
[23] P. Indyk, R. Berinde, Sparse recovery using sparse matrices, CSAIL technical report, Massachusetts Institute of Technology, 2008
[24] M. Iwen, A deterministic sub-linear time sparse Fourier algorithm via non-adaptive compressed sensing methods, in: Proc. ACM-SIAM Symposium on Discrete Algorithms (SODA), 2008 · Zbl 1192.94047
[25] Kashin, B.: The widths of certain finite dimensional sets and classes of smooth functions, Izvestia 41, 334-351 (1977)
[26] Kim, S.-J.; Koh, K.; Lustig, M.; Boyd, S.; Gorinevsky, D.: A method for $\ell_1$-regularized least squares, IEEE J. Select. top. Signal process. 1, No. 4, 606-617 (2007)
[27] Mallat, S.; Zhang, Z.: Matching pursuits with time-frequency dictionaries, IEEE trans. Signal process. 41, No. 12, 3397-3415 (1993) · Zbl 0842.94004 · doi:10.1109/78.258082
[28] Miller, A. J.: Subset selection in regression, (2002) · Zbl 1051.62060
[29] D. Needell, J.A. Tropp, CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, ACM Technical Report 2008-01, California Institute of Technology, Pasadena, 2008 · Zbl 1163.94003
[30] D. Needell, R. Vershynin, Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit, submitted for publication · Zbl 1183.68739
[31] D. Needell, R. Vershynin, Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit, Found. Comput. Math. (2008), in press · Zbl 1183.68739
[32] Nesterov, Y. E.; Nemirovski, A. S.: Interior point polynomial algorithms in convex programming, (1994) · Zbl 0824.90112
[33] H. Rauhut, On the impossibility of uniform sparse reconstruction using greedy methods, Sampl. Theory Signal Image Process. (2008), doi:10.1007/s10208-008-9031-3, in press · Zbl 1182.94026
[34] G. Reeves, M. Gastpar, Sampling bounds for sparse support recovery in the presence of noise, in: Proceedings of the IEEE International Symposium on Information Theory (ISIT 2008), Toronto, Canada, July 2008
[35] M. Rudelson, R. Vershynin, Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements, in: Proceedings of the 40th Annual Conference on Information Sciences and Systems, Princeton, 2006 · Zbl 1114.60009
[36] Temlyakov, V.: Nonlinear methods of approximation, Found. comput. Math. 3, No. 1, 33-107 (2003) · Zbl 1039.41012 · doi:10.1007/s102080010029
[37] Tropp, J. A.: Beyond Nyquist: efficient sampling of sparse, bandlimited signals
[38] Tropp, J. A.: Greed is good: algorithmic results for sparse approximation, IEEE trans. Inform. theory 50, No. 10, 2231-2242 (2004) · Zbl 1288.94019
[39] Tropp, J. A.; Gilbert, A. C.: Signal recovery from random measurements via orthogonal matching pursuit, IEEE trans. Inform. theory 53, No. 12, 4655-4666 (2007) · Zbl 1288.94022
[40] J.A. Tropp, A.C. Gilbert, S. Muthukrishnan, M.J. Strauss, Improved sparse approximation over quasi-incoherent dictionaries, in: Proc. 2003 IEEE International Conference on Image Processing, Barcelona, 2003