Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey
8 Feb 2024 | Yi Xin1, Siqi Luo1,2, Haodi Zhou1, Junlong Du3, Xiaohong Liu2, Yue Fan4, Qing Li4, Yuntao Du4*
This paper provides a comprehensive survey of parameter-efficient fine-tuning (PEFT) methods for pre-trained vision models (PVMs). It begins by defining PEFT and discussing model pre-training methods, then categorizes existing PEFT methods into three groups: addition-based, partial-based, and unified-based. The paper delves into the details of each category, presenting representative methods and their applications. It also discusses popular datasets and applications of visual PEFT, highlighting its broad impact across downstream tasks such as image classification, video action recognition, and dense prediction. Finally, the paper identifies future research challenges, including improving explainability, exploring generative and multimodal models, and building a visual PEFT library. The goal is to serve as a valuable resource for researchers interested in parameter-efficient fine-tuning, providing insights that could inspire further advancements.
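To make the addition-based category concrete, here is a minimal NumPy sketch of a LoRA-style update, one representative addition-based technique: the pre-trained weight stays frozen and only a small low-rank addition is trained. All dimensions and names here are illustrative, not taken from the survey.

```python
import numpy as np

# Frozen pre-trained weight: never updated during fine-tuning.
d_in, d_out, rank = 768, 768, 8
W = np.random.randn(d_out, d_in) * 0.02

# Small trainable low-rank factors added alongside W (LoRA-style).
# A starts at zero so training begins exactly at the pre-trained model.
A = np.zeros((d_out, rank))
B = np.random.randn(rank, d_in) * 0.02

def forward(x):
    # Frozen projection plus the trainable low-rank correction.
    return W @ x + A @ (B @ x)

trainable = A.size + B.size          # 2 * 768 * 8 = 12,288 parameters
total = W.size + trainable
print(f"trainable fraction: {trainable / total:.3%}")  # about 2% of the layer
```

Only `A` and `B` receive gradients, so the tuned parameters are roughly 2% of this layer, which is the core efficiency argument shared by the addition-based methods the survey covers.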