LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement


15 Apr 2016 | Kin Gwn Lore, Adedotun Akintayo, Soumik Sarkar
This paper introduces LLNet, a deep autoencoder-based approach for enhancing low-light images. The method aims to identify signal features from low-light images and adaptively brighten them without over-amplifying the lighter parts of high-dynamic-range scenes. The authors propose a variant of the stacked-sparse denoising autoencoder (SSDA) that learns to enhance and denoise images from synthetically darkened and noisy training examples. The model is trained on images drawn from internet databases, which are synthetically processed to simulate low-light conditions. The proposed framework, LLNet, and its staged version (S-LLNet) are evaluated on both synthetic and natural low-light images, showing significant improvements in visual quality and in quantitative metrics such as PSNR and SSIM compared with other enhancement methods like histogram equalization, CLAHE, and gamma adjustment. The results indicate that deep learning-based approaches are effective for enhancing natural low-light images, with LLNet and S-LLNet performing well across a range of lighting and noise conditions. Future work includes improving the model's robustness to different noise types and extending its application to scenarios beyond low-light environments.
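The training setup described above, where clean patches are synthetically darkened and corrupted to form supervised pairs for a denoising autoencoder, can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the gamma range, noise levels, hidden size, and function names are assumptions chosen for clarity, and a single shallow autoencoder stands in for the deeper stacked-sparse architecture used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

def synthesize_low_light_pairs(clean_patches, rng=None):
    """Create (corrupted, clean) training pairs from clean grayscale patches in [0, 1].

    Illustrative stand-in for the paper's synthetic corruption: each patch is
    darkened with a random gamma adjustment and perturbed with Gaussian noise.
    The parameter ranges below are assumptions, not the paper's exact values.
    """
    rng = rng if rng is not None else np.random.default_rng()
    corrupted = np.empty_like(clean_patches)
    for i, patch in enumerate(clean_patches):
        gamma = rng.uniform(2.0, 5.0)                               # nonlinear darkening
        sigma = rng.uniform(0.0, 0.1)                               # noise strength
        dark = np.power(np.clip(patch, 0.0, 1.0), gamma)            # gamma-darken
        noisy = dark + rng.normal(0.0, sigma, size=patch.shape)     # additive Gaussian noise
        corrupted[i] = np.clip(noisy, 0.0, 1.0)
    return corrupted, clean_patches

class DenoisingAutoencoder(nn.Module):
    """One shallow denoising autoencoder over flattened 17x17 patches.

    LLNet stacks several such modules (an SSDA variant) with layer-wise
    pre-training; this single-layer version only sketches the idea.
    """
    def __init__(self, patch_dim=17 * 17, hidden_dim=400):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(patch_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, patch_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, corrupted, clean):
    """Reconstruct the clean patch from its darkened, noisy counterpart."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(corrupted), clean)  # paper also adds sparsity/weight penalties
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time, the enhanced image would be assembled by running overlapping low-light patches through the trained network and averaging the reconstructed patches back into place; the patch size and overlap here are likewise illustrative choices.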