Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring

7 May 2018 | Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee
This paper presents a deep multi-scale convolutional neural network (CNN) for dynamic scene deblurring, addressing the challenge of removing complex motion blurs caused by multiple object motions, camera shake, and scene depth variation. The proposed method does not rely on assumptions about the blur kernel, such as partial uniformity or local linearity, which are common in conventional energy optimization-based methods and recent machine learning approaches. Instead, it employs a multi-scale loss function that mimics coarse-to-fine optimization techniques and a new large-scale dataset generated using a high-speed camera to capture realistic blurry images and corresponding sharp ground truth images. The model is trained end-to-end to restore sharp images directly without estimating explicit blur kernels, thereby avoiding artifacts caused by kernel estimation errors. Experimental results on the proposed GOPRO dataset and other benchmarks demonstrate that the proposed method outperforms state-of-the-art methods in both qualitative and quantitative evaluations, achieving superior performance in dynamic scene deblurring.
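To make the multi-scale training idea concrete, the sketch below shows one way such a loss could be written in PyTorch. This is an illustrative approximation, not the authors' code: it assumes the network emits one restored image per scale (finest first), and the function name, the use of bilinear downsampling for the targets, and the averaging over scales are assumptions for this example.

```python
# A minimal sketch of a multi-scale restoration loss, assuming the network
# produces a list of predictions, one per pyramid level (finest scale first).
import torch
import torch.nn.functional as F

def multi_scale_loss(predictions, sharp):
    """Mean-squared error accumulated over scales.

    predictions: list of tensors of shape [B, C, H/2^k, W/2^k]
    sharp:       ground-truth sharp image of shape [B, C, H, W]
    """
    loss = 0.0
    for pred in predictions:
        # Downsample the sharp ground truth to match this scale (assumed choice).
        target = F.interpolate(sharp, size=pred.shape[-2:], mode="bilinear",
                               align_corners=False)
        loss = loss + F.mse_loss(pred, target)
    return loss / len(predictions)
```

Supervising every scale in this way, rather than only the full-resolution output, is what lets the coarse levels guide the finer ones, echoing the coarse-to-fine strategy of traditional deblurring pipelines.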