Learning to See in the Dark

4 May 2018 | Chen Chen, Qifeng Chen, Jia Xu, Vladlen Koltun
This paper presents a new dataset and a deep learning pipeline for extreme low-light imaging. The See-in-the-Dark (SID) dataset contains 5094 raw short-exposure images, each paired with a corresponding long-exposure reference image. The data was collected under extreme low-light conditions, with illuminance as low as 0.03 lux, using two cameras with different sensor types: a Sony α7S II and a Fujifilm X-T2. Exposure times for the input images ranged from 1/30 to 1/10 second, while the corresponding reference images used exposures of up to 30 seconds.

The dataset is used to train a fully convolutional network (FCN) for processing low-light images. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline. Training the FCN end-to-end avoids the noise amplification and error accumulation that occur when the stages of a conventional pipeline are applied in sequence. Evaluated on the SID dataset, the network produces promising results, with effective noise reduction and accurate color reproduction.

The paper also compares the FCN against traditional image processing pipelines and other denoising techniques, showing that it outperforms them in extreme low-light conditions. The authors discuss limitations of the current approach, including the need to select the amplification ratio externally and the absence of HDR tone mapping. Future work includes improving the pipeline's performance and exploring real-time processing.
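To make the raw-input stage concrete, the sketch below shows one common way such a pipeline prepares Bayer sensor data before it reaches the network: packing the RGGB mosaic into four half-resolution channels, subtracting the black level, and scaling by the amplification ratio. The channel layout, black/white levels, and function name here are illustrative assumptions, not values taken from the paper; camera-specific calibration would replace them in practice.

```python
import numpy as np

def pack_raw(bayer, black_level=512, white_level=16383, ratio=100.0):
    """Illustrative preprocessing for a raw low-light pipeline (assumed
    details, not the authors' exact code):
      1. subtract the sensor black level and normalize to [0, 1],
      2. pack the RGGB Bayer mosaic into a 4-channel, half-resolution array,
      3. multiply by the externally chosen amplification ratio.
    """
    bayer = bayer.astype(np.float32)
    norm = np.maximum(bayer - black_level, 0.0) / (white_level - black_level)
    h, w = norm.shape
    packed = np.stack([
        norm[0:h:2, 0:w:2],  # R   (assumed RGGB layout)
        norm[0:h:2, 1:w:2],  # G
        norm[1:h:2, 0:w:2],  # G
        norm[1:h:2, 1:w:2],  # B
    ], axis=-1)
    # Brighten the short-exposure input; clip to the valid range.
    return np.minimum(packed * ratio, 1.0)
```

The four-channel tensor would then be fed to the FCN, which outputs a full-resolution RGB image; the amplification ratio plays the role of an ISO/exposure control that, as the paper notes, must currently be chosen outside the network.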