Bidirectional Multi-Scale Implicit Neural Representations for Image Deraining

2 Apr 2024 | Xiang Chen, Jinshan Pan*, Jiangxin Dong
The paper "Bidirectional Multi-Scale Implicit Neural Representations for Image Deraining" by Xiang Chen, Jinshan Pan, and Jiangxin Dong from Nanjing University of Science and Technology introduces an end-to-end multi-scale Transformer-based method for image deraining. The authors address the limitations of existing Transformer-based methods, which often rely on single-scale rain appearance, by developing a bidirectional multi-scale Transformer that leverages features at multiple scales to improve image reconstruction quality. They incorporate intra-scale implicit neural representations (INRs) based on pixel coordinates with degraded inputs in a closed-loop design, enabling the learned features to facilitate rain removal and enhance model robustness in complex scenarios. Additionally, they introduce an inter-scale bidirectional feedback operation to improve the collaboration of features across different scales. Extensive experiments on synthetic and real-world datasets demonstrate that their approach, named NeRD-Rain, outperforms state-of-the-art methods in terms of PSNR and SSIM metrics. The paper also includes a detailed analysis of the effectiveness of the proposed components and discusses limitations and future work.The paper "Bidirectional Multi-Scale Implicit Neural Representations for Image Deraining" by Xiang Chen, Jinshan Pan, and Jiangxin Dong from Nanjing University of Science and Technology introduces an end-to-end multi-scale Transformer-based method for image deraining. The authors address the limitations of existing Transformer-based methods, which often rely on single-scale rain appearance, by developing a bidirectional multi-scale Transformer that leverages features at multiple scales to improve image reconstruction quality. They incorporate intra-scale implicit neural representations (INRs) based on pixel coordinates with degraded inputs in a closed-loop design, enabling the learned features to facilitate rain removal and enhance model robustness in complex scenarios. Additionally, they introduce an inter-scale bidirectional feedback operation to improve the collaboration of features across different scales. Extensive experiments on synthetic and real-world datasets demonstrate that their approach, named NeRD-Rain, outperforms state-of-the-art methods in terms of PSNR and SSIM metrics. The paper also includes a detailed analysis of the effectiveness of the proposed components and discusses limitations and future work.