Deep Learning-Based Technique for Remote Sensing Image Enhancement Using Multiscale Feature Fusion

21 January 2024 | Ming Zhao, Rui Yang, Min Hu, Botao Liu
This paper presents a novel deep-learning model, GSA-Net, for enhancing remote sensing images, particularly in low-light conditions. The model aims to maintain image details while improving brightness through an improved hierarchical structure based on U-Net. To address the issue of insufficient sample data, gamma correction is applied to create low-light images, which are then used for training. The loss function combines Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) to guide the model's optimization. The proposed method is evaluated on the NWPU VHR-10 dataset, demonstrating superior performance compared to other state-of-the-art algorithms in terms of PSNR, SSIM, and Learned Perceptual Image Patch Similarity (LPIPS). Additionally, the enhanced images are shown to improve object detection accuracy in remote sensing applications. Key contributions include the use of depthwise separable convolutions to reduce model parameters, a global spatial attention mechanism to enhance local and global information fusion, and an improved loss function to enhance model convergence. The experimental results validate the effectiveness and robustness of the proposed GSA-Net in low-light image enhancement.
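The gamma-correction step for synthesizing low-light training data can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact pipeline; the gamma value of 3.0 is an arbitrary assumption for demonstration, and `synthesize_low_light` is a hypothetical helper name.

```python
import numpy as np

def synthesize_low_light(image, gamma=3.0):
    """Darken a [0, 1]-normalized image via gamma correction: I_out = I_in ** gamma.

    With gamma > 1, bright pixels are compressed toward zero, simulating
    a low-light capture of the same scene.
    """
    img = np.clip(image, 0.0, 1.0)
    return img ** gamma

# A uniformly bright patch (value 0.8) becomes much darker (0.8 ** 3 = 0.512).
bright = np.full((4, 4, 3), 0.8)
dark = synthesize_low_light(bright, gamma=3.0)
```

Because the original well-lit image is kept as the ground truth, each synthesized pair (dark input, bright target) can supervise the enhancement network without collecting real low-light captures.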
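One plausible form of a loss combining PSNR and SSIM is sketched below. Since maximizing PSNR is equivalent to minimizing MSE, the PSNR term is expressed as an MSE penalty; the single-window SSIM (no Gaussian sliding window) and the weight `alpha` are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def ssim_global(x, y, max_val=1.0):
    """Simplified SSIM computed over the whole image as a single window.

    Standard SSIM averages this statistic over local sliding windows; the
    global version shown here keeps the sketch short.
    """
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def combined_loss(pred, target, alpha=0.5):
    """Hypothetical weighted sum of an MSE (PSNR-driven) term and a
    structural (1 - SSIM) term; identical images give a loss of zero."""
    mse = np.mean((pred - target) ** 2)
    return alpha * mse + (1 - alpha) * (1 - ssim_global(pred, target))
```

The MSE term pushes pixel values toward the ground truth, while the SSIM term penalizes losses of local structure that MSE alone can overlook.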
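The parameter savings from depthwise separable convolutions can be checked with a quick count. The channel sizes (64 in, 128 out) and 3x3 kernel below are illustrative choices, not taken from the paper; bias terms are omitted for simplicity.

```python
def standard_conv_params(c_in, c_out, k):
    # A k x k standard convolution: every output channel mixes all input channels.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise k x k stage (one filter per input channel) followed by a
    # 1x1 pointwise projection to c_out channels.
    return c_in * k * k + c_in * c_out

std = standard_conv_params(64, 128, 3)        # 64 * 128 * 9  = 73728
sep = depthwise_separable_params(64, 128, 3)  # 64 * 9 + 64 * 128 = 8768
```

For this configuration the separable version uses roughly 8x fewer weights, which is the kind of reduction that motivates its use in the model.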