Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring

7 May 2018 | Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee
This paper proposes a deep multi-scale convolutional neural network (DMS-CNN) for dynamic scene deblurring, i.e., removing motion blur caused by multiple object motions, camera shake, and scene depth variation. Traditional methods rely on simplified blur kernel assumptions, and recent machine learning approaches are trained on synthetic datasets generated under the same assumptions, so both perform poorly on complex, spatially varying blur. The proposed DMS-CNN instead restores sharp images end-to-end without assuming any specific blur kernel model, using a multi-scale architecture that mimics conventional coarse-to-fine optimization methods.

To train the network, the authors introduce a new large-scale dataset (GOPRO) of realistic blurry images paired with sharp ground-truth images. Each blurry image is synthesized by capturing consecutive sharp frames with a high-speed camera and averaging them, which mimics how a real exposure integrates light over time.
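A minimal sketch of this blur synthesis is given below. The frame averaging itself is what the paper describes; the simple power-law camera response with gamma 2.2 and the seven-frame window are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def synthesize_blur(sharp_frames, gamma=2.2):
    """Simulate a blurry exposure by averaging consecutive sharp frames.

    Averaging is done in (approximately) linear signal space: frames are
    de-gammaed, averaged, then re-gammaed. The power-law response and the
    gamma value are illustrative assumptions.
    """
    frames = np.stack([f.astype(np.float64) / 255.0 for f in sharp_frames])
    linear = frames ** gamma        # invert the assumed camera response
    blurred = linear.mean(axis=0)   # integrate over the synthetic exposure
    return (blurred ** (1.0 / gamma) * 255.0).astype(np.uint8)

# Example: average 7 consecutive high-speed frames into one blurry image
# (the file names and frame count here are hypothetical):
# frames = [imageio.imread(f"frame_{i:04d}.png") for i in range(7)]
# blurry = synthesize_blur(frames)
```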
The network is fully convolutional and built from residual blocks, so it handles general, locally varying blur kernels implicitly; by avoiding explicit kernel estimation, it also sidesteps the artifacts that kernel errors typically introduce. Training combines a multi-scale content loss, which supervises the output at every scale and improves convergence, with an adversarial loss that further sharpens the results (both the architecture and the loss are sketched below).

Evaluated on the GOPRO dataset and other benchmarks, including the Köhler dataset and the dataset of Lai et al., the method outperforms existing approaches in both image quality and computational efficiency, making it accurate and fast enough for real-world applications.
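The following is a minimal PyTorch sketch of the coarse-to-fine idea described above. The channel width, the number of residual blocks per scale, and the use of bilinear upsampling to pass each coarse result to the next scale are assumptions for illustration; the paper uses much deeper per-scale stacks and learned upconvolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Residual block without batch normalization."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class ScaleNet(nn.Module):
    """One scale of the coarse-to-fine network: conv in, ResBlocks, conv out."""
    def __init__(self, in_ch, ch=64, n_blocks=9):  # n_blocks is illustrative
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 5, padding=2)
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 3, 5, padding=2)

    def forward(self, x):
        return self.tail(self.body(self.head(x)))

class MultiScaleDeblurNet(nn.Module):
    """Coarse-to-fine sketch: deblur at 1/4 scale, upsample, refine at 1/2,
    then at full resolution. Bilinear upsampling replaces the paper's
    learned upconvolution for simplicity."""
    def __init__(self):
        super().__init__()
        self.coarse = ScaleNet(in_ch=3)   # blurry 1/4-scale input
        self.mid    = ScaleNet(in_ch=6)   # blurry 1/2 + upsampled coarse output
        self.fine   = ScaleNet(in_ch=6)   # blurry full + upsampled mid output

    def forward(self, blurry):
        b2 = F.interpolate(blurry, scale_factor=0.5, mode='bilinear', align_corners=False)
        b4 = F.interpolate(blurry, scale_factor=0.25, mode='bilinear', align_corners=False)
        s4 = self.coarse(b4)
        up4 = F.interpolate(s4, scale_factor=2, mode='bilinear', align_corners=False)
        s2 = self.mid(torch.cat([b2, up4], dim=1))
        up2 = F.interpolate(s2, scale_factor=2, mode='bilinear', align_corners=False)
        s1 = self.fine(torch.cat([blurry, up2], dim=1))
        return s1, s2, s4   # every scale is supervised by the multi-scale loss

# model = MultiScaleDeblurNet()
# s1, s2, s4 = model(torch.randn(1, 3, 256, 256))
```

Because every scale produces its own sharp estimate, all three outputs can be supervised directly, which is what the multi-scale content loss does.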
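A corresponding sketch of the training objective, assuming a plain MSE content term at each scale and a standard GAN generator term; the equal weighting across scales and the adversarial weight below are assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def multiscale_content_loss(outputs, sharp):
    """MSE between each scale's output and the correspondingly resized
    ground truth, averaged over scales (equal weighting is an assumption)."""
    loss = 0.0
    for out in outputs:  # (s1, s2, s4) from the model sketched above
        gt = F.interpolate(sharp, size=out.shape[-2:], mode='bilinear', align_corners=False)
        loss = loss + F.mse_loss(out, gt)
    return loss / len(outputs)

# Total objective: content loss plus a small adversarial term from a
# discriminator D on the full-resolution output (D and the 1e-4 weight
# are hypothetical here):
# s1, s2, s4 = model(blurry)
# total = multiscale_content_loss((s1, s2, s4), sharp) \
#         + 1e-4 * F.binary_cross_entropy_with_logits(D(s1), torch.ones_like(D(s1)))
```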