2 Apr 2024 | Quan Zhang, Xiaoyu Liu, Wei Li, Hanting Chen, Junchao Liu, Jie Hu, Zhiwei Xiong, Chun Yuan, Yunhe Wang
This paper proposes a framework to distill semantic priors from the Segment Anything Model (SAM) to enhance existing image restoration (IR) models without affecting their inference efficiency. The framework consists of two key components: the Semantic Priors Fusion (SPF) scheme and the Semantic Priors Distillation (SPD) scheme with a Semantic Guided Relation (SGR) module. The SPF scheme fuses information from the restored image predicted by the original IR model and the semantic mask predicted by SAM to refine the restoration. The SPD scheme leverages self-distillation to transfer the fused semantic priors and improve the performance of the original IR models. The SGR module ensures consistency in the semantic feature representation space so that the priors are fully distilled. The framework is demonstrated across multiple IR models and tasks, including deraining, deblurring, and denoising. The results show that the framework effectively enhances the performance of existing IR models while addressing the computational challenges of integrating SAM: because SAM is used only during the training stage, the inference efficiency of the original IR models is preserved. Extensive experiments on multiple IR models and tasks validate the framework's effectiveness in leveraging SAM's semantic knowledge to improve image restoration performance.
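At a high level, the self-distillation described above can be pictured as a feature-matching loss between a teacher branch (whose features are fused with SAM's semantic priors) and the original IR student branch, plus a relation-consistency term standing in for the SGR module. The sketch below is illustrative only: all function names, the loss forms, and the weighting are hypothetical assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of semantic-prior self-distillation.
# All names and loss forms here are hypothetical -- the paper's actual
# SPD/SGR losses may differ.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def relation_matrix(feats):
    """Pairwise inner products between feature vectors -- a simple stand-in
    for the relations enforced by the Semantic Guided Relation (SGR) module."""
    return [[sum(x * y for x, y in zip(f, g)) for g in feats] for f in feats]

def distillation_loss(student_feats, teacher_feats, alpha=0.5):
    """Feature-matching term plus a relation-consistency term (weight alpha)."""
    feat_term = sum(mse(s, t) for s, t in zip(student_feats, teacher_feats))
    rel_s = relation_matrix(student_feats)
    rel_t = relation_matrix(teacher_feats)
    rel_term = sum(mse(rs, rt) for rs, rt in zip(rel_s, rel_t))
    return feat_term + alpha * rel_term

# When student and teacher features already agree, the loss is zero.
# The teacher branch (and hence SAM) is needed only at training time,
# so the student's inference cost is unchanged.
student = [[1.0, 2.0], [3.0, 4.0]]
teacher = [[1.0, 2.0], [3.0, 4.0]]
print(distillation_loss(student, teacher))  # 0.0
```

The key property the sketch mirrors is that the distillation signal flows only through the training loss, so the deployed student model is architecturally identical to the original IR model.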