March 4, 2003 | David L. Donoho and Michael Elad
The article by David L. Donoho and Michael Elad studies the problem of finding the sparsest representation of a signal $\mathcal{S}$ in a general dictionary $\mathbf{D}$ via $\ell^1$ minimization. The authors extend previous work that treated the special case where $\mathbf{D}$ is an overcomplete system formed by concatenating exactly two orthobases. They show that for general dictionaries, sufficiently sparse representations can be found by solving the $\ell^1$ minimization problem, a convex relaxation of the combinatorial $\ell^0$ minimization problem (both problems are stated formally after the list below). The key results include:
1. **Uniqueness**: Any representation of a signal $\mathcal{S}$ with fewer than $\operatorname{Spark}(\mathbf{D}) / 2$ non-zero entries is necessarily the unique sparsest representation, where $\operatorname{Spark}(\mathbf{D})$ is the size of the smallest linearly dependent subset of columns of $\mathbf{D}$. The reason is that the difference of two distinct representations of the same signal lies in the null space of $\mathbf{D}$, so its support indexes a linearly dependent subset of columns and must have at least $\operatorname{Spark}(\mathbf{D})$ entries; this is impossible if both representations have fewer than $\operatorname{Spark}(\mathbf{D}) / 2$ non-zeros.
2. **Equivalence**: If a signal $\mathcal{S}$ admits a sufficiently sparse representation in $\mathbf{D}$, then $\ell^1$ minimization necessarily finds it. Specifically, this holds whenever the number of non-zero entries (the $\ell^0$ norm) of the sparsest representation is less than $(1 + 1 / M(\mathbf{G})) / 2$, where $M(\mathbf{G})$ is the largest off-diagonal magnitude of the Gram matrix $\mathbf{G} = \mathbf{D}^T \mathbf{D}$, i.e., the mutual coherence of the dictionary (whose columns are taken to have unit norm).
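For reference, the two optimization problems underlying these results can be written as follows (a standard formulation consistent with the quantities above; $\boldsymbol{\alpha}$ denotes the vector of representation coefficients):

$$
(P_0):\ \min_{\boldsymbol{\alpha}} \|\boldsymbol{\alpha}\|_0 \ \text{ subject to } \ \mathbf{D}\boldsymbol{\alpha} = \mathcal{S}, \qquad\quad (P_1):\ \min_{\boldsymbol{\alpha}} \|\boldsymbol{\alpha}\|_1 \ \text{ subject to } \ \mathbf{D}\boldsymbol{\alpha} = \mathcal{S},
$$

where $\|\cdot\|_0$ counts non-zero entries; the equivalence result specifies when the tractable problem $(P_1)$ recovers the solution of the combinatorial problem $(P_0)$.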
The authors also provide lower bounds on $\operatorname{Spark}(\mathbf{D})$ using the quantity $\mu_1(\mathbf{G})$, the smallest integer $m$ such that at least one collection of $m$ off-diagonal magnitudes in $\mathbf{G}$ sums to at least 1. By a diagonal-dominance (Geršgorin) argument, any $\mu_1(\mathbf{G})$ or fewer columns of $\mathbf{D}$ are linearly independent, so $\operatorname{Spark}(\mathbf{D}) \geq \mu_1(\mathbf{G}) + 1$; and since each off-diagonal magnitude is at most $M(\mathbf{G})$, this implies $\operatorname{Spark}(\mathbf{D}) \geq 1 + 1 / M(\mathbf{G})$.
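As a concrete illustration of these quantities (an illustrative sketch, not code from the paper; the choice of dictionary is an assumption), the following computes $M(\mathbf{G})$, $\mu_1(\mathbf{G})$, and $\operatorname{Spark}(\mathbf{D})$ by brute force for a small two-orthobasis dictionary and confirms the bounds:

```python
import itertools

import numpy as np

def spark(D):
    """Size of the smallest linearly dependent subset of columns (brute force)."""
    _, n_cols = D.shape
    for k in range(1, n_cols + 1):
        for idx in itertools.combinations(range(n_cols), k):
            if np.linalg.matrix_rank(D[:, idx]) < k:
                return k
    return n_cols + 1  # every subset independent (possible only if n_cols <= rows)

def coherence_quantities(D):
    """M(G) and mu_1(G) for a dictionary with unit-norm columns."""
    G = D.T @ D
    off = np.abs(G[~np.eye(G.shape[0], dtype=bool)])  # off-diagonal magnitudes
    M = off.max()
    # mu_1(G): smallest m whose m largest off-diagonal magnitudes sum to >= 1
    cumulative = np.cumsum(np.sort(off)[::-1])
    mu1 = int(np.searchsorted(cumulative, 1.0)) + 1
    return M, mu1

# Example dictionary: identity concatenated with a normalized Hadamard basis in R^4
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]]) / 2.0
D = np.hstack([np.eye(4), H])

M, mu1 = coherence_quantities(D)
print(f"1 + 1/M(G)  = {1 + 1 / M:.1f}")   # coherence-based lower bound: 3.0
print(f"mu_1(G) + 1 = {mu1 + 1}")         # mu_1-based lower bound: 3
print(f"Spark(D)    = {spark(D)}")        # exact value by brute force: 4
```

Here both lower bounds give 3, while the true spark is 4 (e.g., $e_1 + e_3$ equals the sum of the first two Hadamard atoms), illustrating that the bounds can be loose.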
The article concludes with three applications:
1. **Separating Points, Lines, and Planes**: For volumetric data on a $p \times p \times p$ voxel grid, the authors consider a dictionary of planes, lines, and points and show that if a volume is a superposition of fewer than about $\sqrt{p}$ such atoms, that superposition is the unique sparsest representation.
2. **Robust Multiuser Private Broadcasting**: A scheme is presented in which information is encoded by superposing it into a single signal vector intended for one recipient. The encoded vector appears as random noise to anyone but the intended recipient, who can decode it perfectly even when some entries are corrupted (a toy $\ell^1$-recovery sketch appears after this list).
3. **Blind Identification of Sources**: In the blind source separation setting, the uniqueness result implies that source signals admitting sufficiently sparse representations in the dictionary can be identified from their observed mixtures, since no competing representation of comparable sparsity can exist.
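Finally, as referenced in application 2 above, the sketch below (illustrative code under assumed choices, not the paper's actual scheme) shows the core computational primitive behind both the equivalence result and the decoding step: solving $(P_1)$ as a linear program via the standard split $\boldsymbol{\alpha} = u - v$ with $u, v \geq 0$, and recovering a representation sparse enough to fall under the $(1 + 1/M(\mathbf{G}))/2$ threshold:

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(D, s):
    """Solve (P1): min ||alpha||_1 subject to D @ alpha = s, as a linear program.

    Split alpha = u - v with u, v >= 0, so that ||alpha||_1 = sum(u) + sum(v).
    """
    _, m = D.shape
    c = np.ones(2 * m)                  # objective: sum of u and v
    A_eq = np.hstack([D, -D])           # enforces D @ (u - v) = s
    res = linprog(c, A_eq=A_eq, b_eq=s, bounds=(0, None), method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v

# Same identity-plus-Hadamard dictionary as above: M(G) = 1/2, so the
# equivalence threshold (1 + 1/M)/2 = 1.5 guarantees recovery of any
# representation with a single non-zero coefficient.
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]]) / 2.0
D = np.hstack([np.eye(4), H])

alpha_true = np.zeros(8)
alpha_true[5] = 3.0                     # 1-sparse: below the threshold
s = D @ alpha_true                      # the observed signal

alpha_hat = l1_minimize(D, s)
print(np.round(alpha_hat, 6))           # recovers alpha_true exactly
```

In this example the competing representation of $s$ using identity atoms alone has $\ell^1$ norm 6 versus 3 for the true 1-sparse representation, so $(P_1)$ has a unique minimizer, exactly as the equivalence theorem predicts.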