12 Jul 2024 | Hai Jiang, Ao Luo, Xiaohong Liu, Songchen Han, Shuaicheng Liu
The paper introduces LightenDiffusion, an unsupervised framework for low-light image enhancement that integrates Retinex theory with diffusion models. The method decomposes images into content-rich reflectance maps and content-free illumination maps within the latent space, using a content-transfer decomposition network. The reflectance map of the low-light image and the illumination map of the normal-light image are then fed into a diffusion model for restoration, guided by features from the low-light input. A self-constrained consistency loss is proposed to prevent interference from normal-light content, improving visual quality. Extensive experiments on various benchmarks show that LightenDiffusion outperforms existing unsupervised methods and is comparable to supervised methods, demonstrating its effectiveness and generalizability. The method also shows potential in low-light face detection, improving the precision of the RetinaFace detector.
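As a rough illustration of the pipeline described above, the sketch below mimics the Retinex-style cross-combination in plain NumPy: both images are split into reflectance and illumination (here with a naive Gaussian-blur estimate rather than the paper's learned content-transfer decomposition network, and in image space rather than latent space), and the low-light reflectance is paired with the normal-light illumination before a placeholder restoration step stands in for the diffusion model. All function names and the decomposition heuristic are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def naive_retinex_decompose(img, sigma=15.0):
    """Toy Retinex split: illumination ~ smoothed image, reflectance = img / illumination.
    Stand-in for the paper's learned content-transfer decomposition network."""
    illumination = gaussian_filter(img, sigma=(sigma, sigma, 0)) + 1e-6
    reflectance = np.clip(img / illumination, 0.0, None)
    return reflectance, illumination


def placeholder_diffusion_restore(latent, guidance):
    """Placeholder for the latent diffusion restoration guided by low-light features.
    Here it simply passes the input through unchanged."""
    return latent


def enhance(low_img, normal_img):
    # Decompose both images (in the paper this happens in latent space).
    refl_low, _ = naive_retinex_decompose(low_img)
    _, illum_normal = naive_retinex_decompose(normal_img)
    # Cross-combine: content from the low-light image, lighting from the normal-light image.
    combined = refl_low * illum_normal
    # Restore with the (stubbed) diffusion model, guided by the low-light input.
    restored = placeholder_diffusion_restore(combined, guidance=low_img)
    return np.clip(restored, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low = rng.uniform(0.0, 0.2, size=(64, 64, 3))     # synthetic dark image
    normal = rng.uniform(0.3, 1.0, size=(64, 64, 3))  # synthetic well-lit image
    out = enhance(low, normal)
    print(out.shape, out.min(), out.max())
```

The cross-combination step is the core idea: reflectance carries scene content, illumination carries lighting, so swapping in well-lit illumination relights the dark scene before the (real) diffusion model refines the result.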