12 Jul 2024 | Hai Jiang, Ao Luo, Xiaohong Liu, Songchen Han, Shuaicheng Liu
LightenDiffusion is an unsupervised low-light image enhancement framework that integrates Retinex theory with diffusion models. The method employs a content-transfer decomposition network (CTDN) to decompose latent-space features into content-rich reflectance maps and content-free illumination maps, enabling effective unsupervised restoration. The reflectance map of the low-light image and the illumination map of an unpaired normal-light image are fed to a diffusion model that, guided by the low-light feature, restores the image. A self-constrained consistency loss ensures the restored image retains the intrinsic content of the low-light input, improving visual quality. Extensive experiments on real-world benchmarks show that LightenDiffusion outperforms state-of-the-art unsupervised methods and is comparable to supervised methods, demonstrating strong generalization. The method improves contrast and color, reduces artifacts, and transfers to a variety of scenes, including downstream tasks such as low-light face detection. Trained on unpaired data, the framework leverages the generative ability of diffusion models to achieve superior performance in both quantitative and qualitative evaluations.
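For orientation, a minimal sketch of the Retinex formulation the method builds on; the symbols F_low, F_high, and the CTDN outputs below are illustrative notation consistent with the abstract, not equations quoted from the paper.

\[ I = R \odot L \qquad \text{(Retinex: reflectance $R$ carries scene content, illumination $L$ carries lighting)} \]
\[ (R_{low},\, L_{low}) = \mathrm{CTDN}(F_{low}), \qquad (R_{high},\, L_{high}) = \mathrm{CTDN}(F_{high}) \]
\[ \hat{F} = R_{low} \odot L_{high} \qquad \text{(low-light content relit with normal-light illumination, refined by the diffusion model)} \]

Under this reading, the self-constrained consistency loss penalizes content drift between the restored output and the low-light input, which is what allows training to proceed on unpaired data without ground-truth normal-light counterparts.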