## Stable signal recovery from incomplete and inaccurate measurements

Zbl 1098.94009

Summary: Suppose we wish to recover a vector \(x_0\in\mathbb R^m\) (e.g., a digital signal or image) from incomplete and contaminated observations \(y = A x_0 + e\), where \(A\) is an \(n\times m\) matrix with far fewer rows than columns \((n\ll m)\) and \(e\) is an error term. Is it possible to recover \(x_0\) accurately from the data \(y\)?

To recover \(x_0\), we consider the solution \(x^{\#}\) to the \(\ell_1\)-regularization problem

\[ \min\|x\|_{\ell_1}\text{ subject to }\|Ax-y\|_{\ell_2}\leq \varepsilon, \]

where \(\varepsilon\) is the size of the error term \(e\). We show that if \(A\) obeys a uniform uncertainty principle (with unit-normed columns) and if the vector \(x_0\) is sufficiently sparse, then the solution is within the noise level

\[ \|x^{\#}-x_0\|_{\ell_2}\leq C\cdot \varepsilon. \]

As a first example, suppose that \(A\) is a Gaussian random matrix; then stable recovery occurs for almost all such \(A\)'s provided that the number of nonzeros of \(x_0\) is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of \(x_0\); then stable recovery occurs for almost any set of coefficients provided that the number of nonzeros is of the order of \(n/(\log m)^6\). In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can very nearly recover approximately sparse signals.
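The Gaussian case above is easy to illustrate numerically. The sketch below is our own (not code from the paper): it draws a Gaussian \(A\), forms noisy measurements \(y = Ax_0 + e\), and, instead of the constrained program, solves the closely related Lagrangian (LASSO) form \(\min_x \tfrac12\|Ax-y\|_2^2 + \lambda\|x\|_1\) by plain iterative soft thresholding (ISTA). The dimensions, the weight `lam`, and the iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 80, 200, 5                        # n << m, k-sparse signal

# Gaussian matrix scaled so columns have roughly unit norm.
A = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)

eps = 0.01                                  # size of the error term
e = eps * rng.standard_normal(n) / np.sqrt(n)   # ||e||_2 is about eps
y = A @ x0 + e

def soft(v, t):
    """Soft-thresholding operator, the prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.01                                  # regularization weight (illustrative)
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(5000):                       # ISTA: gradient step, then shrink
    x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)

# Recovery error is small relative to ||x0||, in line with the C * eps bound.
print(np.linalg.norm(x - x0))
```

The Lagrangian form is used here only because it admits this very short solver; solving the constrained problem \(\|Ax-y\|_{\ell_2}\leq\varepsilon\) directly requires a second-order cone programming solver, as discussed in the paper's references.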

### MSC:

| Code  | Classification |
|-------|----------------|
| 94A12 | Signal theory (characterization, reconstruction, filtering, etc.) |
| 94A34 | Rate-distortion theory in information and communication theory |
| 62B10 | Statistical aspects of information-theoretic topics |
| 65K10 | Numerical optimization and variational techniques |
| 94A08 | Image processing (compression, reconstruction, etc.) in information and communication theory |

### Keywords:

approximately sparse signals

*E. J. Candès* et al., Commun. Pure Appl. Math. 59, No. 8, 1207–1223 (2006; Zbl 1098.94009)
