MoDL: Model Based Deep Learning Architecture for Inverse Problems


5 Jun 2019 | Hemant K. Aggarwal, Member, IEEE, Merry P. Mani, and Mathews Jacob, Senior Member, IEEE
The paper introduces a model-based deep learning framework, MoDL, for image reconstruction from noisy and sparse multichannel measurements. MoDL combines a convolutional neural network (CNN) with a regularization prior to systematically derive deep architectures for inverse problems. By explicitly accounting for the forward model, the approach requires a smaller network with fewer parameters to capture image information, reducing both the amount of training data needed and the training time. The CNN weights are customized to the forward model through end-to-end training, with weights shared across iterations, which improves performance over pre-trained denoisers. Experiments demonstrate that this design decouples the number of iterations from network complexity, leading to lower training data demand, reduced risk of overfitting, and a significantly smaller memory footprint. Using numerical optimization blocks, such as conjugate gradients, within the network yields faster convergence per iteration than methods relying on proximal gradient steps. The framework is evaluated on several datasets and compared with other deep learning and traditional methods, showing improved performance, especially when training data is limited.
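To make the alternation concrete, below is a minimal NumPy sketch of the unrolled recursion the summary describes: each iteration applies a CNN denoiser and then enforces data consistency by solving (A^H A + λI)x = A^H b + λz with conjugate gradients. The function names, the forward operator `A`, its adjoint `AH`, the `denoiser` callable, and all parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def conjugate_gradient(apply_op, rhs, x0, n_iter=10, tol=1e-8):
    """Solve apply_op(x) = rhs for a Hermitian positive-definite operator."""
    x = x0.copy()
    r = rhs - apply_op(x)          # initial residual
    p = r.copy()                   # initial search direction
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = apply_op(p)
        alpha = rs_old / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def modl_recon(A, AH, b, denoiser, lam=0.05, n_unrolls=10, cg_iter=10):
    """Unrolled MoDL-style recursion (illustrative sketch):
    alternate a learned denoiser z_k = D_w(x_k) with a CG solve of
    (A^H A + lam * I) x = A^H b + lam * z_k."""
    x = AH(b)                      # initial estimate: adjoint reconstruction
    normal_op = lambda v: AH(A(v)) + lam * v
    for _ in range(n_unrolls):
        z = denoiser(x)            # same denoiser weights reused at every unroll
        rhs = AH(b) + lam * z
        x = conjugate_gradient(normal_op, rhs, x0=z, n_iter=cg_iter)
    return x
```

Reusing the same `denoiser` weights at every unroll is what decouples the iteration count from network complexity: adding iterations improves data consistency without growing the number of trainable parameters.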