The chapter introduces compressive sampling (CS) and its recovery via convex programming, focusing on image compression and acquisition. It begins by highlighting the importance of image compression in modern applications, where storing and transmitting high-resolution images is practical only because of compression. The central concept is to transform an image into a suitable basis and code only the important expansion coefficients, a process that has been studied extensively, particularly with wavelet transforms.
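As a rough illustration of this transform-coding idea, the sketch below uses a 2-D DCT and a small synthetic test image (rather than the wavelet transforms and natural images the chapter emphasizes): keep only the largest-magnitude expansion coefficients and reconstruct from those alone. All sizes and thresholds are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Synthetic piecewise-constant test image (a stand-in for a natural image).
n = 64
img = np.zeros((n, n))
img[16:48, 16:48] = 1.0
img[24:40, 24:40] = 0.5

# Expand in an orthonormal 2-D DCT basis.
coeffs = dctn(img, norm="ortho")

# Keep only the k largest-magnitude coefficients ("code only the important ones").
k = 200
thresh = np.sort(np.abs(coeffs).ravel())[-k]
compressed = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

# Reconstruct from the retained coefficients and measure the relative error.
approx = idctn(compressed, norm="ortho")
rel_err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"kept {k} of {n*n} coefficients, relative error = {rel_err:.3e}")
```

For simple images like this one, a small fraction of the coefficients already captures most of the energy, which is the observation that motivates coding only the important coefficients.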
The chapter then generalizes the notion of sampling by defining measurements as inner products of the image with different test functions, rather than point evaluations or averages. This approach allows for the acquisition of information in various domains, such as Fourier coefficients, line integrals, or pixel values. The choice of test functions is crucial for minimizing the number of measurements needed to reconstruct the image faithfully.
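A minimal sketch of this generalized-sampling view follows, with hypothetical choices of test functions (point evaluations, Fourier rows, pseudorandom ±1 rows): each measurement is just an inner product $y_k = \langle x, \varphi_k \rangle$, i.e. one row of a measurement matrix applied to the vectorized image. The specific constructions are illustrative, not the chapter's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 16
x = rng.standard_normal(n)            # the (vectorized) signal or image

# Three choices of test functions phi_k, each stored as a row of Phi,
# so every measurement is the inner product y_k = <x, phi_k>.
Phi_point = np.eye(n)[rng.choice(n, m, replace=False)]            # point evaluations
freqs = rng.choice(n, m, replace=False)
Phi_fourier = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n)) / n)  # Fourier coefficients
Phi_random = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)    # pseudorandom +/-1 test functions

for name, Phi in [("point samples", Phi_point),
                  ("Fourier samples", Phi_fourier),
                  ("random samples", Phi_random)]:
    y = Phi @ x                       # m generalized samples of x
    print(name, y.shape)
```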
The effectiveness of CS is demonstrated through a numerical experiment using the "Camera Man" image. Two strategies are compared: linear imaging, which measures low-pass DCT coefficients, and compressive imaging, which uses pseudorandom measurements and a nonlinear reconstruction procedure. The results show that compressive imaging provides better and faster reconstruction, especially around image edges.
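The chapter's experiment uses the actual Camera Man image; the sketch below only mimics the linear strategy on a synthetic image, measuring a block of low-pass 2-D DCT coefficients and inverting the transform (the pseudorandom, nonlinearly reconstructed counterpart is sketched after the next paragraph). The image and block size are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Synthetic test image with sharp edges (standing in for "Camera Man").
n = 64
img = np.zeros((n, n))
img[12:52, 20:44] = 1.0

# Linear imaging: measure only the low-pass DCT coefficients, i.e. the
# frequencies (i, j) inside a small square in the upper-left corner.
b = 16                                # keep a b x b block of low frequencies
coeffs = dctn(img, norm="ortho")
lowpass = np.zeros_like(coeffs)
lowpass[:b, :b] = coeffs[:b, :b]

# Reconstruction is just the inverse transform; edges come out blurred.
recon = idctn(lowpass, norm="ortho")
rel_err = np.linalg.norm(img - recon) / np.linalg.norm(img)
print(f"{b*b} low-pass measurements, relative error = {rel_err:.3e}")
```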
The chapter also discusses uncertainty principles, which relate the sparsity of a signal in one domain to its spread in another domain. These principles ensure that the measurements are incoherent with the signal's structure, allowing for accurate reconstruction. The recovery process involves solving a convex optimization problem: minimizing the $\ell_1$ norm of the coefficient sequence subject to agreement with the measurements, a program that favors sparse solutions.
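A minimal sketch of that recovery step, assuming a pseudorandom Gaussian measurement matrix and casting the $\ell_1$ problem (basis pursuit) as a linear program solved with SciPy's linprog; the dimensions and solver choice are illustrative, not the chapter's exact setup.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 100, 40, 5                  # ambient dim, measurements, sparsity

# Ground-truth k-sparse signal and pseudorandom measurements y = A x0.
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

# Basis pursuit:  min ||x||_1  s.t.  A x = y,
# written as an LP over z = [x, t]:  min sum(t)  s.t.  -t <= x <= t,  A x = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])  #  x - t <= 0  and  -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
bounds = [(None, None)] * n + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=bounds, method="highs")
x_hat = res.x[:n]
print("recovery error:", np.linalg.norm(x_hat - x0))
```

With these (illustrative) dimensions, the $\ell_1$ program typically returns the sparse vector exactly, whereas a least-squares solution would spread energy over all coordinates.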
Finally, the chapter touches on several topics, including fixed measurement systems, stability, and computational aspects, emphasizing the broad applicability of CS beyond imaging.