Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ¹ minimization

March 4, 2003 | David L. Donoho and Michael Elad
This paper presents a method for finding the sparsest representation of a signal in a general (nonorthogonal) dictionary via $ \ell^{1} $ minimization. The goal is to represent a signal $ S $ as a linear combination of dictionary elements $ d_k $, that is, $ S = \sum_k \gamma(k)\, d_k $, while minimizing the number of nonzero coefficients $ \gamma(k) $. Written as an optimization problem, this is $ (P_0):\ \min_\gamma \|\gamma\|_0 \ \text{subject to}\ S = \sum_k \gamma(k)\, d_k $, a combinatorial problem that is intractable in general. Under certain conditions, however, it can be solved exactly through the convex relaxation $ (P_1):\ \min_\gamma \|\gamma\|_1 $ subject to the same constraint, a problem solvable by standard linear programming.
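To make $ (P_1) $ concrete, the sketch below solves it as a linear program with SciPy by splitting the coefficients into nonnegative parts, $ \gamma = u - v $. This is an illustrative reconstruction, not code from the paper; the Gaussian dictionary, its dimensions, and the 3-sparse target $ \gamma_0 $ are arbitrary choices for demonstration.

```python
# Basis pursuit: min ||gamma||_1 s.t. D @ gamma = S, posed as a linear program.
# Illustrative sketch; the dictionary and signal here are synthetic.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 20, 50                        # signal dimension, dictionary size (overcomplete)
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)       # normalize columns to unit l2 norm

# Build a signal with a known 3-sparse representation gamma0.
gamma0 = np.zeros(m)
gamma0[[4, 17, 31]] = [1.5, -2.0, 0.8]
S = D @ gamma0

# Split gamma = u - v with u, v >= 0, so ||gamma||_1 = 1^T u + 1^T v
# and the equality constraint becomes [D, -D] @ [u; v] = S.
c = np.ones(2 * m)
res = linprog(c, A_eq=np.hstack([D, -D]), b_eq=S,
              bounds=[(0, None)] * (2 * m))
gamma_hat = res.x[:m] - res.x[m:]
print("recovery error:", np.max(np.abs(gamma_hat - gamma0)))
```

For a well-conditioned random dictionary and a sufficiently sparse $ \gamma_0 $, the printed error sits at the solver's tolerance: the linear program recovers the sparsest representation exactly, which is the equivalence phenomenon the paper analyzes.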
The paper discusses several applications of $ \ell^{1} $ minimization, including separating linear and planar features in 3D data, robust multiuser private broadcasting, and blind identification of overcomplete independent component models.

Its central theoretical results give conditions under which the solution of $ (P_1) $ coincides with that of $ (P_0) $, the original problem of finding the sparsest representation. The analysis rests on the Spark of a matrix: the size of the smallest linearly dependent subset of its columns. If a signal has a representation using fewer than half the Spark of the dictionary, that representation is the unique sparsest one. The Spark is in turn bounded below via the Gram matrix of the dictionary: writing $ \mu $ for the mutual coherence, the largest off-diagonal magnitude of the Gram matrix of the column-normalized dictionary, the paper shows $ \mathrm{Spark} \ge 1 + 1/\mu $, and any representation with fewer than $ (1 + 1/\mu)/2 $ nonzeros is recovered exactly by $ \ell^{1} $ minimization (a brute-force illustration of both quantities follows the concluding paragraph).

The paper concludes that $ \ell^{1} $ minimization can find the sparsest representation of a signal in a general dictionary under these conditions, that the method is computationally tractable, and that it applies to a wide range of dictionaries. The results are supported by the applications above and by the theoretical analysis.
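As a concrete illustration of the two quantities above, the following sketch (again synthetic, not the paper's code) computes the Spark by brute-force search over column subsets, which is exponential in cost and only feasible for tiny dictionaries, and compares it with the Gram-matrix bound on a two-orthobasis dictionary built from the identity and a normalized Hadamard basis.

```python
# Spark and mutual coherence of a small dictionary; illustrative sketch.
import itertools
import numpy as np

def spark(D, tol=1e-10):
    """Size of the smallest linearly dependent subset of columns of D."""
    m = D.shape[1]
    for k in range(1, m + 1):
        for cols in itertools.combinations(range(m), k):
            if np.linalg.matrix_rank(D[:, list(cols)], tol=tol) < k:
                return k
    return np.inf  # all columns independent (possible only when m <= n)

def mutual_coherence(D):
    """Largest off-diagonal magnitude of the Gram matrix of the
    column-normalized dictionary."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Two-orthobasis dictionary: identity plus a normalized 4x4 Hadamard basis.
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2.0
D = np.hstack([np.eye(4), H])

mu = mutual_coherence(D)                 # 0.5 for this pair of bases
print("mu:", mu)
print("bound 1 + 1/mu:", 1 + 1 / mu)     # 3.0
print("spark(D):", spark(D))             # 4: e1 + e3 = h1 + h2
print("uniqueness threshold spark/2:", spark(D) / 2)
```

Here the general bound $ 1 + 1/\mu = 3 $ underestimates the true Spark of 4, consistent with the sharper two-orthobasis bound $ 2/\mu $ from the uncertainty-principle literature the paper builds on.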