EnlightenGAN: Deep Light Enhancement without Paired Supervision

Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, and Zhangyang Wang
**EnlightenGAN: Deep Light Enhancement without Paired Supervision**

This paper addresses low-light image enhancement, a task that is difficult when paired training data (low-light and normal-light versions of the same scene) are unavailable. The authors propose EnlightenGAN, an unsupervised generative adversarial network (GAN) that can be trained without paired low/normal-light images. Instead of relying on ground-truth supervision, EnlightenGAN regularizes the unpaired training using information extracted from the input images themselves. Key innovations include a global-local discriminator structure, a self-regularized perceptual loss (self feature preserving loss), and a self-regularized attention mechanism. Extensive experiments demonstrate that EnlightenGAN outperforms existing methods in visual quality and subjective user studies. The flexibility of unpaired training also allows EnlightenGAN to be easily adapted to enhance real-world images from various domains. The code and pre-trained models are available at <https://github.com/VITA-Group/EnlightenGAN>.

- **Low-light Image Enhancement**: Challenges and Importance
- **State-of-the-Art Methods**: Limitations and Shortcomings
- **EnlightenGAN Architecture**: Global-Local Discriminators, Self Feature Preserving Loss, and Attention Mechanism
- **Experiments**: Ablation Study, Comparison with State-of-the-Art, Adaptation on Real-world Images, and Pre-Processing for Improving Classification
- **Conclusion**: Overview of Contributions and Future Work
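To make the self-regularized attention mechanism concrete, the idea is that the illumination channel of the low-light input, normalized to [0, 1], yields an attention map 1 − I, so darker regions receive stronger enhancement. The sketch below is an illustration, not the authors' code; taking the per-pixel maximum over RGB as the illumination estimate is an assumption made here for simplicity.

```python
import numpy as np

def self_attention_map(rgb: np.ndarray) -> np.ndarray:
    """Compute a self-regularized attention map from an H x W x 3 image.

    Values may be in [0, 255] or [0, 1]; the map is 1 - I, where I is a
    per-pixel illumination estimate (max over RGB, an assumed proxy).
    """
    rgb = rgb.astype(np.float64)
    if rgb.max() > 1.0:                 # normalize to [0, 1] if needed
        rgb = rgb / 255.0
    illumination = rgb.max(axis=-1)     # per-pixel illumination estimate
    return 1.0 - illumination           # dark pixels -> attention near 1

# Dark pixels get attention ~1, bright pixels ~0:
img = np.zeros((2, 2, 3))               # all-dark image ...
img[0, 0] = [1.0, 1.0, 1.0]             # ... with one bright pixel
att = self_attention_map(img)
```

In the paper, maps like `att` modulate the generator's feature maps (and the output) element-wise, steering enhancement toward under-exposed regions.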