26 Jun 2024 | Yiguo Jiang · Xuhang Chen · Chi-Man Pun · Shuqiang Wang · Wei Feng
The paper introduces MFDNet, a lightweight and efficient network for removing flare artifacts from nighttime photographs. Flare artifacts, caused by light scattering and reflection in camera lenses, degrade image quality and visual information. Traditional methods, including hardware-based and software-based approaches, often fail to effectively remove various flare patterns while preserving image details. Deep learning-based methods have shown promise but suffer from high computational costs and limited receptive fields.
MFDNet addresses these challenges by decomposing the input image into low- and high-frequency bands using the Laplacian Pyramid. The network consists of two main modules: the Low-Frequency Flare Perception Module (LFFPM) and the Hierarchical Fusion Reconstruction Module (HFRM). LFFPM uses Transformers to capture global features and convolutional neural networks to capture local features, effectively removing flares in the low-frequency band. HFRM then fuses the low-frequency flare-free image with the high-frequency content to reconstruct the final image.
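The frequency-split idea can be illustrated with a minimal one-level Laplacian-pyramid decomposition. This is a hypothetical sketch, not MFDNet's actual code: it uses a simple box blur and nearest-neighbor upsampling, whereas the paper presumably uses Gaussian filtering, and the function names (`laplacian_decompose`, `laplacian_reconstruct`) are invented for illustration.

```python
import numpy as np

def laplacian_decompose(img):
    """Split an image (H and W even) into a low-frequency band and a
    high-frequency residual. Illustrative sketch only, not MFDNet code."""
    h, w = img.shape
    # Low band: 2x2 box average, downsampled by 2 (stand-in for a Gaussian blur).
    low = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # High band: residual between the image and the re-upsampled low band.
    up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    high = img - up
    return low, high

def laplacian_reconstruct(low, high):
    """Invert the split: upsample the low band and add back the residual."""
    up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    return up + high
```

The key property is that reconstruction is exact: flares can be suppressed in the compact low band (where most flare energy lives) while the high band preserves fine detail for the final fusion step.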
Experimental results on the Flare7K dataset demonstrate that MFDNet outperforms state-of-the-art methods in terms of PSNR, SSIM, and LPIPS metrics while maintaining low computational complexity. The method is scalable and can handle images from 512×512 to 4K resolution, making it suitable for real-world applications. Ablation studies and limitations are also discussed, highlighting the effectiveness of each component and potential areas for future improvement.
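Of the reported metrics, PSNR is the simplest to state concretely; a minimal reference implementation (not taken from the paper's evaluation code) might look like:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    restored image; higher means closer to the reference. Illustrative
    helper, not the paper's evaluation code."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

SSIM additionally accounts for local structure and LPIPS for perceptual similarity via deep features, which is why papers typically report all three together.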