November 14, 2016 | Kyong Hwan Jin, Michael T. McCann, Member, IEEE, Emmanuel Froustey, Michael Unser, Fellow, IEEE
This paper introduces a novel deep convolutional neural network (CNN) approach to solving ill-posed inverse problems in imaging, focusing in particular on sparse-view X-ray computed tomography (CT). The authors observe that iterative methods for solving these problems can be formulated as CNNs when the normal operator of the forward model is a convolution. They propose combining direct inversion, which encapsulates the physical model of the system, with a CNN that removes artifacts while preserving image structure. The proposed method, called FBPConvNet, uses a U-net architecture with residual learning and multiresolution decomposition. Experimental results on synthetic and real datasets show that FBPConvNet outperforms traditional iterative reconstruction methods, especially in preserving fine details in the reconstructed images. The network is also fast, reconstructing a 512×512 image in less than a second on a GPU. The authors discuss limitations, such as the lack of transferability between different datasets, and suggest future work on adapting the method to other imaging modalities.
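To make the "direct inversion + CNN" idea concrete, below is a minimal, hypothetical sketch assuming scikit-image's Radon/FBP routines and PyTorch. A sparse-view sinogram is inverted with filtered back projection (the direct inversion step), and a small residual CNN, standing in for the paper's full U-net, adds a learned correction to suppress the streak artifacts. The names sparse_view_fbp and SimpleResidualCNN are illustrative and not from the authors' code.

```python
# Hypothetical sketch of the direct-inversion-plus-residual-CNN pipeline.
# Not the authors' implementation: FBPConvNet uses a full U-net, not this tiny CNN.
import numpy as np
import torch
import torch.nn as nn
from skimage.transform import radon, iradon


def sparse_view_fbp(image: np.ndarray, n_views: int = 50) -> np.ndarray:
    """Simulate sparse-view CT: project at few angles, then filtered back-project."""
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(image, theta=angles)   # forward model (Radon transform)
    return iradon(sinogram, theta=angles)   # direct inversion (FBP), streaky for few views


class SimpleResidualCNN(nn.Module):
    """Tiny stand-in for the U-net: predicts a correction and adds it to the FBP input."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, fbp: torch.Tensor) -> torch.Tensor:
        # Residual learning: the network only has to model the artifacts,
        # so the output is the FBP input plus a learned correction.
        return fbp + self.body(fbp)


if __name__ == "__main__":
    phantom = np.zeros((128, 128), dtype=np.float32)
    phantom[32:96, 32:96] = 1.0                      # toy object in place of real CT data
    fbp = sparse_view_fbp(phantom).astype(np.float32)
    x = torch.from_numpy(fbp)[None, None]            # shape (1, 1, H, W)
    model = SimpleResidualCNN()
    with torch.no_grad():                            # untrained here; see note below
        recon = model(x)
    print(recon.shape)
```

In the paper's setup, the CNN is trained with a Euclidean loss so that its output matches a high-quality reconstruction obtained from fully sampled data, with the sparse-view FBP image as input; the sketch above omits training for brevity.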