November 2006 | Emmanuel Candès and Justin Romberg
This paper presents a theoretical analysis of the problem of reconstructing a sparse signal from a limited number of linear measurements. The key result is that $\ell_1$ minimization exactly recovers a sparse signal $x^0 \in \mathbb{R}^n$ from $m$ randomly selected samples of $Ux^0$, where $U$ is an orthonormal matrix, provided that $m \geq C \cdot \mu^2(U) \cdot S \cdot \log n$. Here $S$ is the number of nonzero components of $x^0$, and $\mu(U)$ is the coherence of $U$: $\sqrt{n}$ times the magnitude of its largest entry, so that $1 \leq \mu(U) \leq \sqrt{n}$. The smaller $\mu(U)$, the fewer samples are needed. The result holds for "most" sparse signals supported on a fixed set $T$: given $T$, if the signs of $x^0$ on $T$ and the observed coordinates of $Ux^0$ are drawn at random, the signal is recovered with overwhelming probability. The result is nearly optimal, in the sense that any method succeeding with the same probability would require about the same number of samples.
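As a minimal sketch of the setup described above (not the authors' code), the snippet below recovers a sparse signal from randomly selected samples of $Ux^0$ by $\ell_1$ minimization, cast as a linear program with SciPy. The dimensions, the random orthogonal choice of $U$, and the seed are illustrative assumptions, not values from the paper.

```python
# Basis pursuit recovery of a sparse x0 from m random samples of U @ x0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, S, m = 256, 8, 96                      # ambient dimension, sparsity, samples (assumed values)

# Orthonormal measurement basis U: here a random orthogonal matrix from a QR factorization.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
mu = np.sqrt(n) * np.abs(U).max()         # coherence mu(U), between 1 and sqrt(n)

# Sparse signal: random support T of size S, random signs on T (as in the theorem's model).
x0 = np.zeros(n)
T = rng.choice(n, size=S, replace=False)
x0[T] = rng.choice([-1.0, 1.0], size=S)

# Observe m randomly selected coordinates of U @ x0.
rows = rng.choice(n, size=m, replace=False)
A, b = U[rows], U[rows] @ x0

# l1 minimization: min ||x||_1 s.t. A x = b, via the standard LP split x = u - v, u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print(f"mu(U) = {mu:.2f}, recovery error = {np.linalg.norm(x_hat - x0):.2e}")
```

With these (assumed) sizes the LP typically returns $x^0$ to machine precision, illustrating the exact-recovery claim; shrinking $m$ toward $S$ eventually makes recovery fail, consistent with the $m \gtrsim \mu^2(U)\, S \log n$ threshold.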
The paper also discusses the implications of this result for applications such as image compression and sparse signal recovery: sparse signals can be recovered by $\ell_1$ minimization from a number of measurements proportional to the sparsity of the signal times the logarithm of the dimension of the signal space. In addition, it introduces a new large-deviation inequality that sharpens results on the use of $\ell_1$ minimization for sparse recovery. The results are supported by theoretical analysis and numerical experiments, and the paper concludes with a discussion of their implications for signal processing and compressed sensing.
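To make the scaling of the measurement count concrete, the short sketch below (illustrative values, not from the paper) computes the coherence $\mu(U) = \sqrt{n} \cdot \max_{i,j} |U_{ij}|$ for a few measurement bases and the corresponding quantity $\mu^2(U) \cdot S \cdot \log n$ that the bound is proportional to; the constant $C$ is left unspecified, as in the statement above.

```python
# Compare the coherence of a few orthonormal bases and the implied sample-count scaling.
import numpy as np
from scipy.fft import dct

n, S = 1024, 20                           # assumed dimension and sparsity
bases = {
    "identity (fully coherent)": np.eye(n),
    "orthonormal DCT":           dct(np.eye(n), norm="ortho", axis=0),
    "random orthogonal":         np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))[0],
}
for name, U in bases.items():
    mu = np.sqrt(n) * np.abs(U).max()
    # Proportionality only: the constant C in m >= C * mu^2 * S * log n is not shown.
    print(f"{name:28s}  mu = {mu:6.2f}   mu^2 * S * log n = {mu**2 * S * np.log(n):10.1f}")
```

Incoherent bases such as the DCT keep $\mu(U)$ near its minimum value of 1, so few samples suffice, whereas measuring in the same basis in which the signal is sparse (the identity case) drives $\mu(U)$ to $\sqrt{n}$ and removes any advantage.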