Decoding by Linear Programming

December 2004 | Emmanuel Candès and Terence Tao
This paper addresses the classical error-correcting problem in coding theory, focusing on recovering an input vector \( f \in \mathbf{R}^n \) from corrupted measurements \( y = Af + e \), where \( A \) is an \( m \times n \) matrix and \( e \) is an unknown vector of errors. The authors prove that under suitable conditions on the matrix \( A \), the input \( f \) can be uniquely recovered as the solution to the \( \ell_1 \)-minimization problem \[ \min_{g \in \mathbf{R}^n} \|y - Ag\|_{\ell_1} \] provided that the support of the error vector \( e \) is not too large; specifically, \( \|e\|_{\ell_0} \leq \rho \cdot m \) for some \( \rho > 0 \). Recovery is thus achieved by solving a simple convex optimization problem, which can be recast as a linear program. Numerical experiments suggest that the method works well even when a significant fraction of the output is corrupted.

The paper introduces the concept of a "restrictedly almost orthonormal system," which allows for the exact reconstruction of sparse linear combinations of vectors. The main result is that if the matrix \( F \) has restricted isometry constants \( \delta_S \) and orthogonality constants \( \theta_{S, S'} \) satisfying \( \delta_S + \theta_S + \theta_{S, 2S} < 1 \), then the \( \ell_1 \)-minimization problem recovers \( f \) exactly. This result is deterministic and does not rely on randomization.

The authors also show that Gaussian random matrices with independent standard normal entries satisfy these conditions with high probability, allowing exact recovery of the input vector whenever the fraction of corrupted entries is below a certain threshold. They provide numerical bounds and discuss the implications for signal recovery from highly incomplete data, improving upon earlier work.
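The decoder described above can be sketched concretely. The \( \ell_1 \) problem \( \min_g \|y - Ag\|_{\ell_1} \) is equivalent to a linear program over variables \( (g, t) \): minimize \( \sum_i t_i \) subject to \( -t \le y - Ag \le t \). The sketch below uses `scipy.optimize.linprog`; the function name `l1_decode`, the problem sizes, and the random test instance are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Recover f from y = A f + e by solving min_g ||y - A g||_1.

    Standard LP reformulation: minimize sum(t) over (g, t)
    subject to  A g - t <= y  and  -A g - t <= -y,
    which together enforce t >= |y - A g| componentwise.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])  # objective: sum of t
    I = np.eye(m)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m  # g free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]

# Illustrative experiment: Gaussian coding matrix, a few grossly
# corrupted entries, as in the paper's random-matrix setting.
rng = np.random.default_rng(0)
m, n = 30, 10
A = rng.standard_normal((m, n))
f = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, size=3, replace=False)] = 5.0 * rng.standard_normal(3)
y = A @ f + e
f_hat = l1_decode(A, y)
print(np.max(np.abs(f_hat - f)))  # near zero when recovery is exact
```

With only 3 of 30 entries corrupted, the corruption fraction is well below the recovery threshold, so the LP typically returns \( f \) exactly (up to solver tolerance) despite the errors being arbitrarily large.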
Finally, the paper explores connections with other works, such as those involving error-correcting codes over finite fields and the use of linear programming for decoding in coding theory.