Zbl 1113.15004

The author studies sparse solutions to underdetermined linear systems $y={\Phi}x$ where $y\in {\mathbb{R}}^{n}$ is a given vector and ${\Phi}$ is a given $n\times m$ matrix with $n<m\le \tau n$.

In recent years, a large body of research has focused on the use of overcomplete signal representations. If we have more atoms ${\phi}_{i}\in {\Phi}$ than observation dimensions, $m>n$, then there are many possible representations ${\Phi}x=y$ for a given ${\Phi}$ and $y$. The sparse representation problem is then to find the representation $x$ with the fewest possible non-zero components,
$$({\text{P}}_{0})\qquad \min_{x}{\parallel x\parallel}_{0}\quad \text{subject to}\quad {\Phi}x=y,$$

where ${\parallel \phantom{\rule{4pt}{0ex}}\parallel}_{0}$ is the ${l}_{0}$ norm of a vector $x$, i.e., the number of its non-zero coefficients. This is well known to be an NP-hard problem [cf. *B. K. Natarajan*, SIAM J. Comput. 24, No. 2, 227–234 (1995; Zbl 0827.68054)].

In the signal processing community, *S. S. Chen, D. L. Donoho* and *M. A. Saunders* [SIAM J. Sci. Comput. 20, No. 1, 33–61 (1999; Zbl 0919.94002)] proposed to approximate (${\text{P}}_{0}$) with the ‘relaxed’ ${l}_{1}$ problem
$$({\text{P}}_{1})\qquad \min_{x}{\parallel x\parallel}_{1}\quad \text{subject to}\quad {\Phi}x=y,$$

where ${\parallel \phantom{\rule{4pt}{0ex}}\parallel}_{1}={\sum}_{i=1}^{m}\left|{x}_{i}\right|$ is the ${l}_{1}$ norm of a vector $x$. Problem (${\text{P}}_{1}$), which they called basis pursuit (BP), can be formulated as a linear programming (LP) problem, which can be solved using well-known optimization methods such as the simplex method or interior-point methods [cf. *M. H. Wright*, Bull. Am. Math. Soc., New Ser. 42, No. 1, 39–56 (2005; Zbl 1114.90153)]. Surprisingly, when the solution to (${\text{P}}_{0}$) is sparse, it can coincide with the solution to (${\text{P}}_{1}$).
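To illustrate the LP formulation, here is a minimal sketch (an assumption of this review, not code from the paper): with the standard split $x=u-v$, $u,v\ge 0$, the ${l}_{1}$ norm becomes the linear objective $\sum_i u_i + v_i$, and (${\text{P}}_{1}$) can be handed to a generic LP solver such as `scipy.optimize.linprog`.

```python
# Basis pursuit (P1), min ||x||_1 subject to Phi @ x = y, as a linear
# program via the split x = u - v with u, v >= 0 (a sketch, not the
# paper's code), solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 20, 50, 3                       # n < m; x0 has k nonzeros
Phi = rng.standard_normal((n, m))
x0 = np.zeros(m)
x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0

# Objective sum(u) + sum(v) equals ||x||_1; equality constraint is
# Phi @ (u - v) = y; all 2m variables are nonnegative.
c = np.ones(2 * m)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:m] - res.x[m:]
print("recovery error:", np.max(np.abs(x_hat - x0)))
```

Since $k$ is well below the breakdown point for this size, the LP solution recovers the sparse $x_0$ here, matching the equivalence phenomenon the paper quantifies.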

The equivalence breakdown point of a matrix ${\Phi}$, $\text{EBP}\left({\Phi}\right)$, is the maximal number $N$ such that, for every ${x}_{0}$ with fewer than $N$ nonzeros, the corresponding vector $y={\Phi}{x}_{0}$ generates a linear system $y={\Phi}x$ for which problems (${\text{P}}_{0}$) and (${\text{P}}_{1}$) have identical unique solutions, both equal to ${x}_{0}$.

The main result of the paper is: For each $\tau >1$, there exists a constant ${\rho}^{*}\left(\tau \right)>0$ so that, for every sequence $\left({m}_{n}\right)$ with ${m}_{n}\le \tau n$,
$$P\left\{\text{EBP}\left({\Phi}\right)\ge {\rho}^{*}\left(\tau \right)\cdot n\right\}\to 1\phantom{\rule{4pt}{0ex}}\text{as}\phantom{\rule{4pt}{0ex}}n\to \infty ,$$
where ${\Phi}$ is an $n\times {m}_{n}$ matrix drawn uniformly at random.

In other words, for the overwhelming majority of $n\times m$ matrices ${\Phi}$, the sparsest possible solution and the minimal ${l}_{1}$-solution of $y={\Phi}x$ coincide whenever a solution with at most ${\rho}^{*}n$ nonzeros exists.

The author uses techniques from Banach space theory, in particular quantitative extensions of the almost spherical sections theorem [cf. *A. Dvoretsky*, Proc. Int. Symp. Linear Spaces, Jerusalem 1960, 123–160 (1961; Zbl 0119.31803)], and other related tools exploiting randomness in high-dimensional spaces, including properties of the minimum eigenvalue of Wishart matrices.

The paper concludes by showing that two standard approaches, simple thresholding and greedy subset selection algorithms, perform poorly in the setting of the paper, in contrast to ${l}_{1}$-minimization.
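For concreteness, minimal sketches of the two alternatives (assumed implementations for illustration, not the paper's code): simple thresholding keeps the $k$ atoms most correlated with $y$, while greedy subset selection (orthogonal-matching-pursuit style) adds one atom per step against the current residual.

```python
# Sketches of simple thresholding and greedy subset selection (assumed
# implementations, not the paper's code).
import numpy as np

def threshold_estimate(Phi, y, k):
    # Simple thresholding: keep the k columns most correlated with y,
    # then re-fit their coefficients by least squares.
    idx = list(np.argsort(-np.abs(Phi.T @ y))[:k])
    x = np.zeros(Phi.shape[1])
    x[idx] = np.linalg.lstsq(Phi[:, idx], y, rcond=None)[0]
    return x

def greedy_estimate(Phi, y, k):
    # Greedy subset selection: at each step add the column most correlated
    # with the current residual, then re-fit by least squares.
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ r)))
        if j not in support:
            support.append(j)
        coef = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        r = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Easy instance: few, well-separated nonzeros with unit-norm atoms.
rng = np.random.default_rng(1)
n, m, k = 40, 80, 3
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)
x0 = np.zeros(m)
x0[[5, 17, 60]] = [10.0, -8.0, 6.0]
y = Phi @ x0
x_g = greedy_estimate(Phi, y, k)
x_t = threshold_estimate(Phi, y, k)
print("greedy residual:", np.linalg.norm(y - Phi @ x_g))
print("thresholding residual:", np.linalg.norm(y - Phi @ x_t))
```

On such easy instances both heuristics tend to do well; the paper's point is that, as the number of nonzeros approaches ${\rho}^{*}n$, they break down much earlier than ${l}_{1}$-minimization.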

##### MSC:

| Code | Classification |
| --- | --- |
| 15A06 | Linear equations (linear algebra) |
| 65F20 | Overdetermined systems, pseudoinverses (numerical linear algebra) |
| 65F50 | Sparse matrices (numerical linear algebra) |
| 60D05 | Geometric probability and stochastic geometry |
| 41A65 | Abstract approximation theory |
| 52A41 | Convex functions and convex programs (convex geometry) |
| 90C20 | Quadratic programming |