August 2015 | Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, and Zhangyang Wang
EnlightenGAN is a deep learning model for low-light image enhancement that does not require paired training data. It is an unsupervised generative adversarial network (GAN) trained on unpaired collections of low-light and normal-light images, and it introduces a global-local discriminator structure, a self-regularized perceptual loss, and a self-regularized attention mechanism to improve enhancement quality. In the authors' experiments on multiple datasets, EnlightenGAN outperforms existing methods in visual quality and subjective evaluation, enhancing low-light images with minimal artifacts. Because it does not rely on paired supervision, the model can also be adapted to real-world images from a variety of domains where such data is unavailable, making it a versatile solution for low-light enhancement. The code and pre-trained models are publicly available.
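The self-regularized attention mechanism can be sketched as follows: the idea is that darker regions of the input should receive more enhancement, so an attention map derived from the input's own illumination is used to modulate features. This is a minimal illustrative sketch, not the paper's implementation; in particular, approximating the illumination channel by the per-pixel maximum over RGB, and the function and variable names, are assumptions made for this example.

```python
import numpy as np

def self_regularized_attention(rgb):
    """Illumination-guided attention map (sketch): darker pixels get
    larger weights, so dark regions are enhanced more strongly.

    rgb: float array in [0, 1], shape (H, W, 3).
    Returns an attention map of shape (H, W, 1).
    """
    # Assumption: illumination channel I approximated as max over RGB
    # (the V channel of HSV), normalized to [0, 1].
    illum = rgb.max(axis=-1)
    atten = 1.0 - illum          # 1 - I: emphasize dark regions
    return atten[..., None]      # keep a channel axis for broadcasting

# Toy usage: modulate a hypothetical feature map with the attention map.
img = np.random.rand(8, 8, 3)    # dummy low-light input in [0, 1]
feat = np.random.rand(8, 8, 16)  # hypothetical intermediate features
att = self_regularized_attention(img)
out = feat * att                 # attention-weighted features
```

In the full model, such attention maps are resized and applied at multiple feature levels of the generator; the sketch above only shows the element-wise weighting idea at a single scale.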