SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning

24 Jun 2024 | Jinghan Jia†, Yihua Zhang†, Yimeng Zhang†, Jiancheng Liu†, Bharat Runwal†, James Diffenderfer‡, Bhavya Kailkhura‡, Sijia Liu†,§
The paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" by Jinghan Jia et al. explores the impact of optimizer choice on large language model (LLM) unlearning, a process aimed at removing undesirable data influences and associated model capabilities without compromising utility. The authors highlight the significance of second-order optimization in LLM unlearning, linking it to influence unlearning, a classical approach using influence functions to update the model for data influence removal. They propose Second-Order UnLearning (SOUL), a second-order optimization-based LLM unlearning framework that extends the static, one-shot model update to a dynamic, iterative unlearning process. Extensive experiments across various unlearning tasks, models, and metrics demonstrate that SOUL consistently outperforms conventional first-order methods, offering an effective and broadly applicable solution for LLM unlearning. The paper also discusses the limitations of the study, including the need for further investigation on larger-scale models and robustness under diverse adversarial scenarios.The paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" by Jinghan Jia et al. explores the impact of optimizer choice on large language model (LLM) unlearning, a process aimed at removing undesirable data influences and associated model capabilities without compromising utility. The authors highlight the significance of second-order optimization in LLM unlearning, linking it to influence unlearning, a classical approach using influence functions to update the model for data influence removal. They propose Second-Order UnLearning (SOUL), a second-order optimization-based LLM unlearning framework that extends the static, one-shot model update to a dynamic, iterative unlearning process. Extensive experiments across various unlearning tasks, models, and metrics demonstrate that SOUL consistently outperforms conventional first-order methods, offering an effective and broadly applicable solution for LLM unlearning. The paper also discusses the limitations of the study, including the need for further investigation on larger-scale models and robustness under diverse adversarial scenarios.