July 3, 2008 | Ingrid Daubechies, Ronald DeVore, Massimo Fornasier, C. Sinan Güntürk
The paper presents an iteratively re-weighted least squares (IRLS) algorithm for recovering sparse vectors from linear measurements. The goal is to find the sparsest solution of the underdetermined system $\Phi x = y$, where $\Phi$ is an $m \times N$ matrix with $m < N$ and $y \in \mathbb{R}^m$. The key idea is to alternate between solving a weighted least squares problem and updating the weight vector $w$. The main steps of the IRLS algorithm are as follows:
1. **Initialization**: Start with an initial weight vector $w^0 = (1, \ldots, 1)$ and a small $\epsilon_0$.
2. **Update Step**: For each iteration $n$:
- Solve the weighted least squares problem to find $x^{n+1}$.
- Update $\epsilon_{n+1} = \min\left(\epsilon_n, \frac{r(x^{n+1})_{K+1}}{N}\right)$, where $r(z)_{K+1}$ denotes the $(K+1)$-st largest entry of $z$ in absolute value.
- Update the weight vector via $w_i^{n+1} = \left(|x_i^{n+1}|^2 + \epsilon_{n+1}^2\right)^{-1/2}$ for $i = 1, \ldots, N$.
3. **Convergence Check**: Stop when $\epsilon_n = 0$ (the iterates then no longer change) or when successive iterates are sufficiently close.
The paper proves that, under the Restricted Isometry Property (RIP) of $\Phi$ (which implies the Null Space Property used in the proofs), the IRLS algorithm converges for every $y$; if a $K$-sparse solution exists, the limit is that sparse solution. Once the iterates are sufficiently close to the limit, the convergence rate is linear. The paper also analyzes a non-convex variant based on $\ell_\tau$ minimization with $\tau < 1$, for which local convergence is superlinear of order $2 - \tau$, approaching quadratic as $\tau \to 0$.
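The non-convex $\ell_\tau$ variant changes only the weight rule. A minimal sketch, assuming the exponent form $w_i = (|x_i|^2 + \epsilon^2)^{\tau/2 - 1}$, which reduces to the standard IRLS weight $(|x_i|^2 + \epsilon^2)^{-1/2}$ at $\tau = 1$ (the helper name `tau_weights` is mine):

```python
import numpy as np

def tau_weights(x, eps, tau):
    """Weight rule for the l_tau variant, 0 < tau <= 1.

    tau = 1 gives the standard IRLS weights (x_i^2 + eps^2)^(-1/2);
    smaller tau penalizes small entries more aggressively."""
    return (x**2 + eps**2) ** (tau / 2.0 - 1.0)
```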