Continual Forgetting for Pre-trained Vision Models

18 Jul 2024 | Hongbo Zhao, Bolin Ni, Junsong Fan, Haochen Wang, Yuxi Wang, Fei Zhu, Yuntao Chen, Gaofeng Meng, Zhaoxiang Zhang
The paper addresses the problem of continual forgetting in pre-trained vision models, which involves the selective removal of specific knowledge while maintaining the performance of the remaining knowledge. The authors propose Group Sparse LoRA (GS-LoRA), a method that uses LoRA modules to fine-tune the FFN layers in Transformer blocks for each forgetting task independently. A group sparse regularization is adopted to enable automatic selection of specific LoRA groups and zeroing out the others, ensuring efficient and effective forgetting. GS-LoRA is designed to be parameter-efficient, data-efficient, and easy to implement. Extensive experiments on face recognition, object detection, and image classification tasks demonstrate that GS-LoRA effectively forgets specific classes with minimal impact on other classes. The method is shown to be applicable to large models and scalable across different model sizes.
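To make the core mechanism concrete, below is a minimal PyTorch sketch of the two ingredients the summary describes: a LoRA adapter attached to a frozen linear layer, and a group-lasso penalty that sums the L2 norm of each adapter's parameters so that entire LoRA groups can be driven to zero during fine-tuning. All names (`LoRALayer`, `group_sparse_penalty`) and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LoRALayer(nn.Module):
    """A frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x.  (Illustrative sketch, not the paper's code.)"""

    def __init__(self, dim_in, dim_out, rank=4, alpha=8.0):
        super().__init__()
        self.base = nn.Linear(dim_in, dim_out)
        self.base.weight.requires_grad_(False)   # pre-trained weight stays frozen
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, dim_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(dim_out, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())


def group_sparse_penalty(lora_layers):
    """Group lasso over LoRA modules: sum over layers of the L2 norm of that
    layer's (A, B) parameters.  Minimizing this alongside the forgetting loss
    pushes whole adapter groups to exactly zero, selecting which layers change."""
    return sum(
        torch.sqrt((layer.A ** 2).sum() + (layer.B ** 2).sum())
        for layer in lora_layers
    )
```

In training, this penalty would be added (with a sparsity weight) to the forgetting objective; layers whose adapters shrink to zero are effectively untouched, which is how the group-sparse selection keeps the edit localized.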