19 Apr 2019 | Shi Guo, Zifei Yan, Kai Zhang, Wangmeng Zuo, Lei Zhang
This paper proposes a convolutional blind denoising network (CBDNet) for real-world noisy photographs. The main challenge in denoising real-world images is that deep convolutional neural networks (CNNs) trained on additive white Gaussian noise (AWGN) often fail to generalize to real-world noise, which is more complex and signal-dependent. To address this, CBDNet is trained with a realistic noise model that incorporates both signal-dependent noise and the in-camera processing pipeline, including demosaicing, gamma correction, and JPEG compression. This realistic noise model helps the network generalize much better to real-world noise.
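To make the noise model concrete, the sketch below synthesizes realistic noise on a clean image in [0, 1]: heteroscedastic Gaussian noise with variance x·sigma_s² + sigma_c², followed by a simplified in-camera pipeline (gamma as a stand-in for the camera response function, then JPEG compression). Demosaicing is omitted for brevity, and the parameter values, function name, and gamma approximation are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
import cv2  # used only for JPEG compression

def synthesize_real_noise(clean, sigma_s=0.06, sigma_c=0.02,
                          gamma=2.2, jpeg_quality=80):
    """Add signal-dependent noise to a clean image in [0, 1] and pass it
    through a simplified in-camera pipeline (gamma + JPEG).

    sigma_s, sigma_c, gamma and jpeg_quality are illustrative values.
    """
    # Work in (approximately) linear irradiance space.
    linear = np.clip(clean, 0.0, 1.0) ** gamma

    # Heteroscedastic Gaussian noise: variance depends on intensity,
    # var(x) = x * sigma_s^2 + sigma_c^2.
    variance = linear * sigma_s ** 2 + sigma_c ** 2
    noisy = linear + np.random.randn(*linear.shape) * np.sqrt(variance)

    # Simplified camera response function (plain gamma correction).
    noisy = np.clip(noisy, 0.0, 1.0) ** (1.0 / gamma)

    # JPEG compression, as in the synthetic pipeline.
    img8 = (noisy * 255.0 + 0.5).astype(np.uint8)
    ok, buf = cv2.imencode(".jpg", img8,
                           [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    return cv2.imdecode(buf, cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0
```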
CBDNet consists of two subnetworks: a noise estimation subnetwork and a non-blind denoising subnetwork. The noise estimation subnetwork produces a pixel-wise noise level map, which the non-blind denoising subnetwork takes as an additional input to guide denoising. An asymmetric loss penalizes under-estimation of the noise level more heavily than over-estimation, making the network more robust to mismatches between the noise model and real-world noise.
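A minimal PyTorch sketch of such an asymmetric penalty on the estimated noise level map follows, together with an optional total-variation smoothness term. The weight alpha, the tensor names, and the function signatures are assumptions for illustration rather than the repository's actual API.

```python
import torch

def asymmetric_loss(sigma_hat, sigma, alpha=0.3):
    """Penalize under-estimation (sigma_hat < sigma) more than over-estimation.

    With alpha < 0.5, the weight (1 - alpha) applied where the estimate falls
    below the true noise level exceeds the weight alpha used elsewhere,
    discouraging under-estimation of the noise level map.
    """
    under = (sigma_hat < sigma).float()      # indicator of under-estimation
    weight = torch.abs(alpha - under)        # 1 - alpha if under, else alpha
    return torch.mean(weight * (sigma_hat - sigma) ** 2)

def total_variation(sigma_hat):
    """Optional smoothness prior on the estimated noise level map."""
    dh = sigma_hat[..., 1:, :] - sigma_hat[..., :-1, :]
    dw = sigma_hat[..., :, 1:] - sigma_hat[..., :, :-1]
    return dh.pow(2).mean() + dw.pow(2).mean()
```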
The network is trained on both synthetic and real noisy images. Synthetic images are generated with the realistic noise model above, while real noisy images, paired with nearly noise-free counterparts, supply training data whose noise the synthetic model cannot fully capture. Combining the two sources improves the network's generalization to real photographs.
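One simple way to realize this mixed training is to alternate mini-batches from the two sources, as in the sketch below. It assumes two loaders yielding (noisy, clean) pairs and a model mapping a noisy image to its denoised output, with the full loss (reconstruction plus the asymmetric and TV terms) folded into `criterion`; all names here are illustrative, not the paper's training code.

```python
import itertools
import torch

def train_mixed(model, synthetic_loader, real_loader,
                optimizer, criterion, steps=1000):
    """Alternate mini-batches of synthetic and real noisy/clean pairs."""
    synth = itertools.cycle(synthetic_loader)
    real = itertools.cycle(real_loader)
    model.train()
    for step in range(steps):
        # Even steps: synthetic batch; odd steps: real batch.
        noisy, clean = next(synth) if step % 2 == 0 else next(real)
        optimizer.zero_grad()
        denoised = model(noisy)
        loss = criterion(denoised, clean)
        loss.backward()
        optimizer.step()
```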
Extensive experiments on three real-world noisy image datasets (NC12, DND, and Nam) show that CBDNet outperforms state-of-the-art methods in both quantitative metrics (PSNR, SSIM) and visual quality, removing complex real noise while preserving image structure and fine details. The code for CBDNet is available at https://github.com/GuoShi28/CBDNet.