Continual Forgetting for Pre-trained Vision Models

18 Jul 2024 | Hongbo Zhao, Bolin Ni, Junsong Fan, Haochen Wang, Yuxi Wang, Fei Zhu, Yuntao Chen, Gaofeng Meng, Zhaoxiang Zhang
Continual forgetting is the problem of selectively removing specific knowledge from a pre-trained vision model while preserving performance on the remaining knowledge. This capability is crucial for privacy protection and for reducing model bias in real-world applications. The authors propose Group Sparse LoRA (GS-LoRA), a parameter-efficient and data-efficient method that enables effective and efficient forgetting. GS-LoRA attaches LoRA modules to the FFN layers in Transformer blocks and applies a group sparse regularization that automatically selects which LoRA groups to modify. This design minimizes the impact on the remaining knowledge while allowing specific classes to be forgotten efficiently.

The method is evaluated on face recognition, object detection, and image classification tasks, where it maintains performance on retained classes while forgetting the targeted ones. GS-LoRA also scales across different model sizes and remains effective when the replay data for retained classes is incomplete. It outperforms existing continual learning and machine unlearning approaches in data efficiency, parameter efficiency, and overall performance, indicating that GS-LoRA is a practical and effective solution for continual forgetting in pre-trained vision models.
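To make the mechanism concrete, below is a minimal PyTorch sketch of the two ingredients described above: LoRA adapters on the FFN projections of a Transformer block, and a group-lasso penalty with one group per adapted layer. This is an illustrative reconstruction, not the authors' released implementation; the module names, rank, regularization weight, and the placeholder loss are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update: W x + s * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T) @ self.lora_B.T


def group_sparse_penalty(model: nn.Module) -> torch.Tensor:
    """Group-lasso style regularizer: sum of L2 norms, one group per LoRA-wrapped layer.

    Driving a whole group to zero leaves that FFN layer untouched, so the penalty
    implicitly selects which layers get modified during forgetting.
    """
    norms = [
        torch.cat([m.lora_A.flatten(), m.lora_B.flatten()]).norm(p=2)
        for m in model.modules()
        if isinstance(m, LoRALinear)
    ]
    return torch.stack(norms).sum()


# Toy usage: wrap the two FFN projections of one Transformer block.
ffn = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
ffn[0] = LoRALinear(ffn[0])
ffn[2] = LoRALinear(ffn[2])

x = torch.randn(4, 768)
out = ffn(x)
task_loss = out.pow(2).mean()  # placeholder; stands in for the forgetting and retention objectives
loss = task_loss + 0.01 * group_sparse_penalty(ffn)
loss.backward()
```

In this sketch, the regularization weight (0.01 here, a hypothetical value) trades off how many FFN layers are actually modified against how strongly the forgetting objective is pursued; the placeholder task loss would be replaced by the paper's combined loss on the data to be forgotten and a replay set of retained classes.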