Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models

29 May 2024 | Jiaqi Li, Qianshan Wei, Chuanyi Zhang, Guilin Qi, Miaozeng Du, Yongrui Chen, Sheng Bi
This paper introduces Single Image Unlearning (SIU), an efficient method for machine unlearning in Multimodal Large Language Models (MLLMs). SIU enables the removal of visual recognition of specific concepts using only a single training image. The method is based on two key components: (1) the construction of multifaceted fine-tuning data, and (2) the use of a Dual Masked KL-divergence (DMK) loss trained jointly with a Cross Entropy loss. SIU is evaluated on MMUBench, a new benchmark for machine unlearning in MLLMs, which includes a dataset of 20 concepts with at least 50 images each.

The results show that SIU outperforms existing methods across all evaluation metrics, including efficacy, generality, specificity, fluency, and diversity. Additionally, SIU is found to be robust against membership inference attacks and jailbreak attacks. The paper also discusses the challenges of unlearning in MLLMs, including limited training data and model degradation, and proposes solutions to address these issues. The method is shown to be effective in unlearning visual recognition of concepts while preserving the utility of MLLMs. The paper concludes that SIU is a promising approach for machine unlearning in MLLMs and highlights the importance of further research in this area.
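To make the joint objective concrete, the sketch below shows one plausible way to combine a Cross Entropy term with a masked KL-divergence term against a frozen reference model, in the spirit of the DMK loss described above. This is a minimal illustration, not the paper's implementation: the exact masking scheme, the `kl_mask` construction, and the weighting `lam` are assumptions here.

```python
import torch
import torch.nn.functional as F

def masked_kl_ce_loss(logits, ref_logits, labels, kl_mask, lam=1.0):
    """Joint loss: Cross Entropy on the unlearning targets plus a
    KL-divergence penalty restricted to masked token positions, which
    keeps the updated model close to the frozen reference elsewhere.

    Shapes (hypothetical): logits, ref_logits: (B, T, V);
    labels: (B, T) with -100 for ignored positions; kl_mask: (B, T) bool.
    """
    vocab = logits.size(-1)
    # Standard CE on the fine-tuning targets.
    ce = F.cross_entropy(
        logits.reshape(-1, vocab), labels.reshape(-1), ignore_index=-100
    )
    # Token-level KL(reference || current), zeroed outside the mask.
    log_p = F.log_softmax(logits, dim=-1)
    log_q = F.log_softmax(ref_logits, dim=-1)
    kl_per_token = (log_q.exp() * (log_q - log_p)).sum(dim=-1)
    kl = (kl_per_token * kl_mask).sum() / kl_mask.sum().clamp(min=1)
    return ce + lam * kl
```

When the current and reference logits coincide, the KL term vanishes and the loss reduces to plain Cross Entropy, which is a quick sanity check for the implementation.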