13 March 2024 | Baocai Chang, Jinjiang Li, Haiyang Wang, Mengjun Li
This paper proposes an attention-based color consistency underwater image enhancement network (ACC-Net) to address the challenges of underwater image enhancement, such as color deviation, reduced contrast, and distortion caused by light refraction, scattering, and absorption. The network consists of three main components: an illumination detail network (ID-Net), a balance stretch module, and a prediction learning module. The ID-Net is responsible for generating texture structure and detail information of the image. A novel color restoration module is introduced to better match color and content feature information, maintaining color consistency. The balance stretch module compensates for pixel values using mean and maximum values, adaptively adjusting color distribution. The prediction learning module facilitates context feature interaction to obtain a reliable and effective underwater enhancement model. Experiments on three real underwater datasets show that the proposed method produces more natural enhanced images, performing well compared to state-of-the-art methods.
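The balance stretch idea described above (compensating pixel values with channel mean and maximum values to adjust color distribution) can be illustrated with a small sketch. The paper's exact formulation is not reproduced here; this version is a hypothetical gray-world-style mean compensation followed by a per-channel max stretch, under the assumption of a float image in [0, 1].

```python
import numpy as np

def balance_stretch(img: np.ndarray) -> np.ndarray:
    """Illustrative balance stretch (hypothetical formulation, not the paper's).

    Per channel: compensate toward the global mean (gray-world style),
    then stretch by the channel maximum so values fill [0, 1].
    """
    img = img.astype(np.float64)
    out = np.empty_like(img)
    gray_mean = img.mean()  # global mean used as the balance reference
    for c in range(img.shape[2]):
        ch = img[..., c]
        # mean compensation: scale the channel so its mean matches the global mean
        balanced = ch * (gray_mean / (ch.mean() + 1e-8))
        # max stretch: normalize by the channel maximum
        out[..., c] = balanced / (balanced.max() + 1e-8)
    return np.clip(out, 0.0, 1.0)
```

A channel that is attenuated underwater (typically red) gets scaled up toward the global mean before stretching, which is the intuition behind adaptive color-distribution adjustment.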
The paper also discusses the challenges of underwater image enhancement, including light scattering, refraction, diffusion, absorption, underwater currents, and noise. It reviews related work across prior-based methods, model-free methods, and deep learning-based methods, and highlights their limitations: the instability of physical models, and a focus on improving contrast and eliminating color casts that often overlooks color consistency and naturalness. ACC-Net addresses these limitations by prioritizing color consistency, using balanced stretching to improve contrast and remove color casts, and employing a CNN-transformer feature extractor to strengthen feature extraction. The main contributions are the deep illumination detail network, the balanced stretching module, and comprehensive experiments evaluating the proposed model.
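The CNN-transformer feature extractor mentioned above can be sketched as a convolutional stem (local texture and detail) feeding a transformer encoder layer (global context). The layer sizes, the downsampling ratio, and the token layout below are all assumptions for illustration; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class ConvTransformerExtractor(nn.Module):
    """Hypothetical CNN-transformer hybrid feature extractor (sketch only)."""

    def __init__(self, in_ch: int = 3, dim: int = 64, heads: int = 4):
        super().__init__()
        # CNN stem: two stride-2 convolutions capture local detail at 1/4 scale
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer layer: lets spatial positions exchange global context
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.stem(x)                        # (B, dim, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)   # (B, H*W/16, dim) token sequence
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```

The convolution supplies the local inductive bias that pure transformers lack, while the attention layer models long-range dependencies such as spatially varying color casts.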