Image De-raining Using a Conditional Generative Adversarial Network

2 Jun 2019 | He Zhang, Member, IEEE, Vishwanath Sindagi, Student Member, IEEE, Vishal M. Patel, Senior Member, IEEE
This paper proposes the Image De-raining Conditional Generative Adversarial Network (ID-CGAN) for single image de-raining. The method leverages the generative modeling capabilities of conditional GANs to synthesize de-rained images that are indistinguishable from their corresponding clean images, with the adversarial loss acting as additional regularization that leads to superior results. ID-CGAN introduces a refined loss function and architectural novelties in the generator-discriminator pair: the generator is built from densely connected networks, while a multi-scale discriminator leverages both local and global information to decide whether an image is real or synthesized. A refined perceptual loss further ensures visually appealing results.

The method is evaluated on synthetic and real rainy images, showing superior quantitative and visual performance over existing single image de-raining methods. In addition, experiments on object detection datasets using Faster-RCNN demonstrate that de-raining with ID-CGAN improves detection performance on rain-degraded images. Overall, the proposed method addresses the challenges of single image de-raining by incorporating discriminative information into the optimization, explicitly accounting for visual quality, and capturing both local and global image structure, achieving state-of-the-art performance.
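The refined objective described above combines a per-pixel reconstruction term, a perceptual term computed in a deep feature space, and an adversarial term from the discriminator. A minimal NumPy sketch of such a weighted combination is shown below; the weights `lam_e`, `lam_p`, `lam_a` and the shape of the feature inputs are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def euclidean_loss(pred, target):
    """Per-pixel mean squared error between the de-rained output and the clean image."""
    return float(np.mean((pred - target) ** 2))

def perceptual_loss(pred_feats, target_feats):
    """MSE in a deep feature space (the paper uses pretrained CNN activations;
    here the features are passed in as plain arrays for illustration)."""
    return float(np.mean((pred_feats - target_feats) ** 2))

def adversarial_loss(disc_score_on_fake):
    """Non-saturating generator term -log D(G(x)); D outputs a score in (0, 1)."""
    eps = 1e-8  # guard against log(0)
    return float(-np.log(disc_score_on_fake + eps))

def refined_loss(pred, target, pred_feats, target_feats, disc_score,
                 lam_e=1.0, lam_p=1.0, lam_a=0.01):
    """Weighted sum of the three terms; the lam_* weights are illustrative."""
    return (lam_e * euclidean_loss(pred, target)
            + lam_p * perceptual_loss(pred_feats, target_feats)
            + lam_a * adversarial_loss(disc_score))
```

In practice the per-pixel and perceptual terms anchor the output to the ground truth, while the (typically down-weighted) adversarial term pushes the result toward the distribution of clean images.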