NTIRE 2024 Restore Any Image Model (RAIM) in the Wild Challenge

16 May 2024 | Jie Liang, Radu Timofte, Qiaosi Yi, Shuaizheng Liu, Lingchen Sun, Rongyuan Wu, Xindong Zhang, Hui Zeng, Lei Zhang, Yibin Huang, Shuai Liu, Yongqiang Li, Chaoyu Feng, Xiaotao Wang, Lei Lei, Yuxiang Chen, Xiangyu Chen, Qiubo Chen, Fengyu Sun, Mengying Cui, Jiaxu Chen, Zhenyu Hu, Jingyun Liu, Wenzhuo Ma, Ce Wang, Hanyou Zheng, Wanjie Sun, Zhenzhong Chen, Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjölund, Thomas B. Schön, Xiong Dun, Pengzhou Ji, Yujie Xing, Xuquan Wang, Zhanshan Wang, Xinbin Cheng, Jun Xiao, Chenhang He, Xiuyuan Wang, Zhi-Song Liu, Zimeng Miao, Zhicun Yin, Ming Liu, Wangmeng Zuo, Shuai Li
The NTIRE 2024 Restore Any Image Model (RAIM) in the Wild Challenge aimed to bridge the gap between academic research and practical image restoration applications. The challenge focused on restoring real-world images suffering from complex and unknown degradations, emphasizing both generative perceptual quality and fidelity. It consisted of two tasks: one using paired data with reference ground truth (R-GT) for quantitative evaluation, and another using unpaired images for a comprehensive user study. Over 200 participants registered, 39 of whom submitted results, and the top-ranked methods significantly improved on the state of the art.

The challenge provided two types of validation and test data: paired data with R-GT and unpaired data. The paired data covered scenarios such as image denoising, super-resolution, out-of-focus restoration, and motion deblurring. The unpaired data targeted issues such as smoothed details, text stroke adhesion, high-light edge artifacts, and low-frequency color noise.

Evaluation combined quantitative metrics (PSNR, SSIM, LPIPS, DISTS, NIQE) with subjective evaluation by 18 experienced practitioners; a sketch of the full-reference metric computation is given after this summary. The challenge ran in three phases: model design and tuning, online feedback, and final evaluation, with awards for top performers.

Several teams contributed innovative methods, including a Wavelet UNet with a hybrid Transformer and CNN, a combination of SUPIR and DeSRA, a consistency-guided stable diffusion method, an integrated framework for degradation-aware image restoration, a photo-realistic image restoration method with enriched vision-language features, a DRBFormer-StableSR fusion network, and a DiffIR-based approach.
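To make the paired-data evaluation concrete, the following is a minimal Python sketch that computes three of the listed full-reference metrics, PSNR and SSIM via scikit-image and LPIPS via the `lpips` package, between a restored image and its R-GT. The file names are hypothetical placeholders, and DISTS and NIQE are omitted; this illustrates the metrics themselves under stated assumptions, not the challenge's official evaluation scripts.

```python
# Minimal sketch of full-reference quality metrics on a paired sample.
# Assumes 8-bit RGB images of identical size; "restored.png" and
# "r_gt.png" are placeholder file names, not challenge data.
import numpy as np
import torch
import lpips
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def to_lpips_tensor(img: np.ndarray) -> torch.Tensor:
    """Convert an HWC uint8 image to the NCHW float tensor in [-1, 1]
    that the lpips package expects."""
    t = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0)
    return t / 127.5 - 1.0


restored = imread("restored.png")  # model output
r_gt = imread("r_gt.png")          # reference ground truth (R-GT)

# Distortion-oriented metrics: higher is better.
psnr = peak_signal_noise_ratio(r_gt, restored, data_range=255)
ssim = structural_similarity(r_gt, restored, channel_axis=-1, data_range=255)

# Perceptual metric: lower LPIPS means perceptually closer to the R-GT.
lpips_fn = lpips.LPIPS(net="alex")
with torch.no_grad():
    lp = lpips_fn(to_lpips_tensor(restored), to_lpips_tensor(r_gt)).item()

print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}  LPIPS: {lp:.4f}")
```

Note that full-reference scores like these apply only to the paired track, where an R-GT exists; the unpaired track instead relied on the user study and no-reference measures such as NIQE.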
The challenge's datasets and results are available online, and the organizers acknowledged support from various sponsors and institutions.