an:05051899
Zbl 1098.94009
Candès, Emmanuel J.; Romberg, Justin K.; Tao, Terence
Stable signal recovery from incomplete and inaccurate measurements
EN
Commun. Pure Appl. Math. 59, No. 8, 1207-1223 (2006).
00181821
2006
j
94A12 94A34 62B10 65K10 94A08
approximately sparse signals
Summary: Suppose we wish to recover a vector \(x_0\in\mathbb R^m\) (e.g., a digital signal or image) from incomplete and contaminated observations \(y = A x_0 + e\); \(A\) is an \(n\times m\) matrix with far fewer rows than columns \((n\ll m)\) and \(e\) is an error term. Is it possible to recover \(x_0\) accurately based on the data \(y\)?
To recover \(x_0\), we consider the solution \(x^{\#}\) to the \(\ell_1\)-regularization problem
\[
\min\|x\|_{\ell_1}\text{ subject to }\|Ax-y\|_{\ell_2}\leq \varepsilon,
\]
where \(\varepsilon\) is the size of the error term \(e\). We show that if \(A\) obeys a uniform uncertainty principle (with unit-normed columns) and if the vector \(x_0\) is sufficiently sparse, then the solution is within the noise level
\[
\|x^{\#}-x_0\|_{\ell_2}\leq C\cdot \varepsilon.
\]
As a first example, suppose that \(A\) is a Gaussian random matrix; then stable recovery occurs for almost all such \(A\)'s provided that the number of nonzeros of \(x_0\) is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of \(x_0\); then stable recovery occurs for almost any set of coefficients provided that the number of nonzeros is of the order of \(n/(\log m)^6\).
In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
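The stable-recovery statement above can be illustrated numerically. The sketch below is not the authors' method; it solves the closely related unconstrained Lagrangian (LASSO) form \(\min_x \tfrac12\|Ax-y\|_{\ell_2}^2+\lambda\|x\|_{\ell_1}\) by iterative soft-thresholding (ISTA), with a Gaussian measurement matrix as in the first example. All dimensions, the regularization weight, and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 80, 200, 5                 # n observations, m-dim signal, k nonzeros (assumed sizes)
A = rng.standard_normal((n, m)) / np.sqrt(n)   # Gaussian matrix, columns roughly unit-normed
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
e = 0.01 * rng.standard_normal(n)    # small noise term
y = A @ x0 + e

# ISTA for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1 (Lagrangian surrogate
# for the constrained l1 problem in the summary)
lam = 0.01                           # assumed regularization weight
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient (squared spectral norm)
x = np.zeros(m)
for _ in range(2000):
    grad = A.T @ (A @ x - y)         # gradient of the quadratic term
    z = x - grad / L                 # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

err = np.linalg.norm(x - x0)         # should be on the order of the noise level
```

For a sufficiently sparse \(x_0\), the recovery error `err` stays proportional to the noise size, matching the bound \(\|x^{\#}-x_0\|_{\ell_2}\leq C\cdot\varepsilon\); with \(e = 0\) the recovery becomes essentially exact.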