Daubechies, Ingrid; DeVore, Ronald; Fornasier, Massimo; Güntürk, C. Sinan: Iteratively reweighted least squares minimization for sparse recovery. (English) Zbl 1202.65046
Commun. Pure Appl. Math. 63, No. 1, 1-38 (2010).

From the authors' abstract: "Under certain conditions (known as the restricted isometry property, or RIP) on the \(m \times N\) matrix \(\Phi\) (where \(m < N\)), vectors \(x \in \mathbb{R}^N\) that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from \(y := \Phi x\) even though \(\Phi^{-1}(y)\) is typically an \((N - m)\)-dimensional hyperplane; in addition, \(x\) is then equal to the element in \(\Phi^{-1}(y)\) of minimal \(\ell_1\)-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining \(x\), as the limit of an iteratively reweighted least squares algorithm. We prove that when \(\Phi\) satisfies the RIP conditions, the sequence \(x^{(n)}\) converges for all \(y\), regardless of whether \(\Phi^{-1}(y)\) contains a sparse vector."

Reviewer: Murli Gupta (Washington, D.C.)

Cited in 2 Reviews; cited in 186 Documents.

MSC:
65F20 Numerical solutions to overdetermined systems, pseudoinverses
94A08 Image processing (compression, reconstruction, etc.) in information and communication theory
42C40 Nontrigonometric harmonic analysis involving wavelets and other special systems
65K10 Numerical optimization and variational techniques
90C30 Nonlinear programming

Keywords: least squares; minimization; sparse recovery; restricted isometry property; linear programming algorithms
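The iteratively reweighted least squares scheme described in the abstract can be sketched in NumPy. This is an illustrative sketch, not the authors' reference implementation: the function name `irls_sparse`, the iteration cap, and the stopping tolerance are assumptions, while the weight formula \(w_i = (x_i^2 + \varepsilon^2)^{-1/2}\) and the update \(\varepsilon_{n+1} = \min(\varepsilon_n, r_{k+1}(x^{(n+1)})/N)\) (with \(r_{k+1}\) the \((k+1)\)-st largest magnitude) follow the algorithm analyzed in the paper.

```python
import numpy as np

def irls_sparse(Phi, y, k, n_iter=200):
    """Sketch of IRLS for sparse recovery (illustrative parameters).

    Each step solves a weighted least-squares problem over the affine
    solution set {x : Phi x = y}, then updates the weights and the
    smoothing parameter eps from the new iterate.
    """
    m, N = Phi.shape
    x = np.linalg.pinv(Phi) @ y  # start from the minimal l2-norm solution
    eps = 1.0
    for _ in range(n_iter):
        # weights w_i = (x_i^2 + eps^2)^(-1/2); let D = diag(1 / w_i)
        d = np.sqrt(x**2 + eps**2)
        # minimizer of sum_i w_i x_i^2 subject to Phi x = y has the
        # closed form x = D Phi^T (Phi D Phi^T)^{-1} y
        PhiD = Phi * d  # Phi @ diag(d), scaled column-wise
        x = d * (Phi.T @ np.linalg.solve(PhiD @ Phi.T, y))
        # eps update: (k+1)-st largest magnitude of the iterate over N
        r = np.sort(np.abs(x))[::-1]
        eps = min(eps, r[k] / N)
        if eps < 1e-9:  # iterate is numerically k-sparse; stop
            break
    return x
```

On a well-conditioned random Gaussian matrix (a standard way to satisfy RIP with high probability, per references [2] and [10]), this iteration typically recovers a sufficiently sparse vector exactly from \(y = \Phi x\).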