This paper proposes a deep autoencoder-based approach, called LLNet, for enhancing natural low-light images. The method uses a stacked sparse denoising autoencoder (SSDA) to learn signal features from low-light images and adaptively brighten and denoise them. The model is trained on synthetically darkened and noise-added images that simulate low-light conditions, making it effective for images taken in natural low-light environments or degraded by hardware. Evaluated against other image enhancement techniques, the model compares favorably in both visual and quantitative comparisons.
The LLNet framework is trained to learn features from low-light images and adaptively brighten and denoise them. It uses a deep network structure with multiple layers to learn invariant features of low-light images. The model is trained on images from internet databases that are synthetically darkened and corrupted with Gaussian noise to simulate low-light conditions. It is also tested on natural low-light images captured with regular cell-phone cameras to demonstrate its effectiveness.
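The synthetic degradation described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the function name, the specific gamma value, and the noise level are assumptions for illustration, though the two operations (nonlinear darkening and additive Gaussian noise) follow the training-data construction the summary describes.

```python
import numpy as np

def degrade_patch(patch, gamma, sigma, rng):
    """Synthetically darken a clean patch (values in [0, 1]) via gamma
    adjustment, then corrupt it with additive Gaussian noise.
    Hypothetical helper; parameter values are illustrative only."""
    darkened = patch ** gamma                      # gamma > 1 darkens the image
    noisy = darkened + rng.normal(0.0, sigma, patch.shape)
    return np.clip(noisy, 0.0, 1.0)                # keep pixels in valid range

rng = np.random.default_rng(0)
clean = rng.random((17, 17))                       # illustrative grayscale patch
low_light = degrade_patch(clean, gamma=3.0, sigma=0.05, rng=rng)
```

Training pairs are then (degraded patch, clean patch), so the network learns to invert both the darkening and the noise in one mapping.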
The paper also proposes a staged LLNet (S-LLNet) with separate modules for contrast enhancement and denoising. S-LLNet is trained with darkened-only and noisy-only training sets, allowing more flexibility in training and potentially better performance, at the cost of slightly higher inference time, which may matter for real-time applications.
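The staged design can be sketched as two sequential modules, each a denoising-autoencoder-style encode/decode pass. This is a structural sketch only: the class, layer sizes, and random (untrained) weights are assumptions standing in for the paper's trained SSDA stages.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SSDAStage:
    """Stand-in for one trained SSDA module: a single encode/decode pass.
    Weights are randomly initialised here purely for illustration."""
    def __init__(self, n_in, n_hidden, rng):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)

    def __call__(self, x):
        h = sigmoid(x @ self.W1 + self.b1)          # encode
        return sigmoid(h @ self.W2 + self.b2)       # decode

rng = np.random.default_rng(1)
contrast_stage = SSDAStage(n_in=289, n_hidden=100, rng=rng)  # 17x17 patches
denoise_stage = SSDAStage(n_in=289, n_hidden=100, rng=rng)

patch = rng.random(289)
# S-LLNet-style staging: enhance contrast first, then denoise.
enhanced = denoise_stage(contrast_stage(patch))
```

Because each stage is trained on its own degradation (darkening only, or noise only), the modules can be retrained or swapped independently, which is the flexibility the staged variant buys.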
The model is evaluated using metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The results show that LLNet and S-LLNet outperform other methods such as histogram equalization, CLAHE, gamma adjustment, and hybrid methods like HE+BM3D in terms of PSNR and SSIM. The model is also tested on natural low-light images, where it performs well in denoising and contrast enhancement.
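For reference, the two evaluation metrics mentioned above can be computed as follows. PSNR is standard; the SSIM shown here is a simplified single-window version of the metric (the full SSIM averages this statistic over local windows), included only to make the formula concrete.

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, test, max_val=1.0):
    """Single-window SSIM over the whole image (simplified; the full
    metric averages this over local windows)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(2)
ref = rng.random((32, 32))
noisy = np.clip(ref + rng.normal(0, 0.1, ref.shape), 0, 1)
score_psnr = psnr(ref, noisy)        # higher is better
score_ssim = global_ssim(ref, noisy) # 1.0 means identical images
```

Higher PSNR indicates lower pixel-wise error; SSIM of 1.0 indicates structurally identical images, so both metrics reward outputs closer to the clean reference.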
The paper concludes that deep autoencoders are effective tools for learning underlying signal characteristics and noise structures from low-light images without hand-crafted features. Future work includes training with Poisson noise and quantization artifacts, adding de-blurring capability, and performing subjective evaluations with human users.