Towards Realistic Data Generation for Real-World Super-Resolution

2024-06-12 | Long Peng, Wenbo Li, Renjing Pei, Jingjing Ren, Yang Wang, Xueyang Fu, Yang Cao, Zheng-Jun Zha
This paper introduces RealDGen, an unsupervised learning framework for generating realistic and diverse real-world super-resolution data. RealDGen addresses the challenge of producing large-scale, high-quality paired data that mirrors real-world degradation patterns, which is crucial for improving the generalization of super-resolution (SR) models. The framework decouples content from degradation, enabling the generation of realistic low-resolution (LR) images from unpaired real LR and high-resolution (HR) images.

Content and degradation extractors capture robust representations, which are then fed into a diffusion model to generate realistic LR images. Training proceeds in two stages: the extractors are first pre-trained with contrastive and reconstruction learning strategies, then fine-tuned to adapt to different degradation patterns. Evaluated on various real-world benchmarks, RealDGen outperforms existing methods at generating realistic paired data and improves the results of popular SR models trained on it. The method also generalizes well to out-of-distribution data, underscoring the importance of realistic, adaptive data generation for real-world SR applications.
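The content-degradation decoupling idea can be illustrated with a minimal NumPy sketch. Note this is only a toy analogue, not RealDGen itself: the paper uses learned extractors and a diffusion model, whereas here "content" is approximated by average pooling of the HR image and "degradation" by a blur plus a noise level estimated from an unpaired real LR image. All function names are hypothetical.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur via edge-padded shifted sums."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def estimate_noise_sigma(lr):
    """Crude degradation descriptor: std of the high-frequency residual."""
    return float(np.std(lr - box_blur(lr)))

def synthesize_lr(hr, ref_lr, scale=2, rng=None):
    """Toy LR synthesis: HR 'content' + degradation estimated from a real LR."""
    rng = rng or np.random.default_rng(0)
    # "Content": average pooling as a stand-in for a learned content extractor.
    h, w = hr.shape[0] // scale, hr.shape[1] // scale
    content = hr[:h * scale, :w * scale].reshape(h, scale, w, scale).mean(axis=(1, 3))
    # "Degradation": transfer the noise level observed in the unpaired real LR.
    sigma = estimate_noise_sigma(ref_lr)
    lr = box_blur(content) + rng.normal(0.0, sigma, content.shape)
    return np.clip(lr, 0.0, 1.0)
```

The point of the sketch is the data flow: content comes from the HR image, degradation statistics come from a real LR image, and the two are combined to produce a pseudo-realistic LR/HR pair for SR training.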