October 2004; Revised March 2006 | Emmanuel Candès† and Terence Tao‡
This paper addresses the problem of recovering a signal \( f \in \mathbb{R}^N \) from a limited number of linear measurements. Specifically, it asks how many measurements are needed to reconstruct \( f \) to within precision \( \epsilon \) in the Euclidean metric. The authors show that if \( f \) is sparse or compressible in a fixed basis, it can be reconstructed accurately from a small number of random measurements by solving a simple linear program. More precisely, suppose the \( n \)-th largest entry of \( |f| \) (or of its coefficients in a fixed basis) satisfies \( |f|_{(n)} \leq R \cdot n^{-1/p} \) for some \( R > 0 \) and \( 0 < p < 1 \), and suppose \( K \) measurements \( y_k = \langle f, X_k \rangle \) are taken against Gaussian vectors \( X_k \) with independent standard normal entries. Then with high probability, the reconstruction \( f^\sharp \), defined as the element of minimal \( \ell_1 \) norm among all candidates consistent with the data \( y_k = \langle f^\sharp, X_k \rangle \), satisfies:
\[
\|f - f^\sharp\|_{\ell_2} \leq C_p \cdot R \cdot (K/\log N)^{-r}, \quad r = 1/p - 1/2.
\]
The result is nearly optimal in the sense that no set of \( K \) measurements can, in general, achieve essentially better accuracy. The methodology extends to other random measurement ensembles, such as randomly sampled Fourier coefficients. The paper also establishes the uniqueness of the minimizer for the Gaussian ensemble and provides a detailed proof of the main theorem.
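The recovery procedure described above can be sketched numerically. The following is a minimal illustration, not code from the paper: it draws a Gaussian measurement ensemble, measures a sparse signal, and recovers it by ℓ1 minimization. The ℓ1 problem is recast as a linear program via the standard positive/negative split \( f = f^+ - f^- \) and solved with `scipy.optimize.linprog`; the problem sizes (`N`, `K`, `S`) are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper):
# ambient dimension N, number of measurements K, sparsity S.
N, K, S = 128, 48, 5

# A sparse signal: S nonzero entries at random locations.
f = np.zeros(N)
f[rng.choice(N, size=S, replace=False)] = rng.standard_normal(S)

# Gaussian measurement ensemble: y_k = <f, X_k>, X_k i.i.d. standard normal.
X = rng.standard_normal((K, N))
y = X @ f

# Basis pursuit: minimize ||g||_1 subject to Xg = y.
# Split g = g_plus - g_minus with g_plus, g_minus >= 0, so the
# objective sum(g_plus) + sum(g_minus) equals ||g||_1 at the optimum.
c = np.ones(2 * N)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
f_sharp = res.x[:N] - res.x[N:]

# For a sufficiently sparse f and enough Gaussian measurements,
# the recovery error is typically negligible.
print(np.linalg.norm(f - f_sharp))
```

With compressible (rather than exactly sparse) signals, the same program returns an approximation whose error obeys the bound above rather than an exact recovery.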