MoDL: Model Based Deep Learning Architecture for Inverse Problems


5 Jun 2019 | Hemant K. Aggarwal, Member, IEEE, Merry P. Mani, and Mathews Jacob, Senior Member, IEEE
MoDL is a model-based deep learning architecture for inverse problems that integrates a convolutional neural network (CNN) as a regularization prior. The framework systematically derives deep architectures for inverse problems with arbitrary structure, using the forward model explicitly to reduce the need for large networks and training data. Because the CNN weights are shared across iterations and trained end-to-end, the network is customized to the forward model, improving performance over pre-trained denoisers.

Data consistency is enforced with numerical optimization blocks such as conjugate-gradient solves, which leads to faster convergence and better performance, especially when GPU memory limits the number of unrolled iterations. The framework starts from an alternating recursive algorithm and unrolls it into a deep network with interleaved CNN blocks and data-consistency blocks. This architecture allows efficient training with fewer parameters and better performance in data-constrained settings.

Experiments show that MoDL outperforms competing methods in reconstruction quality, particularly with limited training data and across varying acceleration factors. The framework is robust to different acquisition settings and can be applied to a range of imaging tasks, including MRI reconstruction and super-resolution. The end-to-end training strategy, combined with weight sharing and conjugate-gradient optimization, yields significant gains in performance and efficiency over traditional methods, with higher PSNR values and better reconstruction quality across acceleration factors and data settings.
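To make the alternating recursion concrete: in the MoDL formulation, each unrolled iteration applies the CNN denoiser, z_k = D_w(x_k), and then enforces data consistency by solving the normal equations (A^H A + λI) x_{k+1} = A^H b + λ z_k with a few conjugate-gradient steps. Below is a minimal NumPy sketch of this recursion under those assumptions; it is illustrative, not the authors' implementation. The callables `A`, `AH`, and `denoiser` stand in for the forward operator, its adjoint, and the shared-weight CNN, and the names `modl_recursion` and `conjugate_gradient` are hypothetical.

```python
import numpy as np

def conjugate_gradient(normal_op, rhs, x0, n_iter=10, tol=1e-8):
    """Solve normal_op(x) = rhs for a Hermitian positive-definite
    linear operator given as a callable, via standard CG."""
    x = x0.copy()
    r = rhs - normal_op(x)
    p = r.copy()
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = normal_op(p)
        alpha = rs_old / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def modl_recursion(A, AH, b, denoiser, lam=1.0, n_unrolls=10, cg_iters=10):
    """Unrolled MoDL-style recursion: alternate the CNN prior
    z_k = D_w(x_k) with a CG data-consistency step solving
    (A^H A + lam*I) x = A^H b + lam * z_k."""
    AHb = AH(b)
    x = AHb.copy()  # initialize with the adjoint reconstruction
    normal_op = lambda v: AH(A(v)) + lam * v
    for _ in range(n_unrolls):
        z = denoiser(x)       # CNN block (same weights at every unroll)
        rhs = AHb + lam * z   # right-hand side of the normal equations
        x = conjugate_gradient(normal_op, rhs, x0=x, n_iter=cg_iters)
    return x
```

For example, with a subsampled Fourier forward model, `A` would be a masked FFT, `AH` its adjoint, and `denoiser` a small trained CNN; any callables of matching shapes work in this sketch. Note that the same `denoiser` weights are reused at every unrolled iteration, which is what keeps the parameter count independent of the number of unrolls.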