Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning

9 Jul 2024 | Chongyu Fan*, Jiancheng Liu*, Alfred Hero, Sijia Liu
The paper addresses the challenge of identifying the worst-case forget set in machine unlearning (MU), the process of eliminating the influence of specific data points from a trained model while preserving its utility. Traditional evaluations often rely on random data forgetting, which yields high variance and limited insight into the true performance of MU methods. To address this, the authors propose a bi-level optimization (BLO) framework that identifies the subset of data most resistant to influence erasure, termed the worst-case forget set: the upper level selects the forget set to maximize the difficulty of unlearning, while the lower level performs the unlearning itself, subject to preserving model utility.

The BLO framework is designed to be computationally efficient and adaptable to different unlearning scenarios, including class-wise and prompt-wise forgetting. Extensive experiments across datasets and tasks, including image classification and text-to-image generation, demonstrate that the method reveals the true performance of MU algorithms and provides a more reliable evaluation framework. The results also highlight the importance of data selection in MU and suggest directions for future research, such as incorporating curriculum learning to improve unlearning effectiveness.
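The bi-level idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the logistic-regression setup, the single approximate "unlearning" routine (gradient ascent on the weighted forget loss, descent on the retain loss), and the heuristic score update are all simplifying assumptions chosen for demonstration. The outer loop assigns higher selection scores to points whose loss stays low even after the unlearning step, i.e., points whose influence is hardest to erase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs, linear classifier (an assumed setup;
# the paper evaluates image classifiers and text-to-image models).
n, d = 60, 2
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(theta, X, y, sample_w=None):
    """Gradient of the (optionally per-sample weighted) logistic loss."""
    g = sigmoid(X @ theta) - y
    if sample_w is not None:
        g = g * sample_w
    return X.T @ g / len(y)

# Pretrain the model on all data.
theta = np.zeros(d)
for _ in range(200):
    theta -= 0.5 * loss_grad(theta, X, y)

# Soft selection scores s; forget weights w = sigmoid(s).
s = np.zeros(n)
k = n // 5  # target forget-set size

for _ in range(50):
    w = sigmoid(s)
    # Lower level: approximate unlearning -- descend on the retain
    # loss (weights 1-w), ascend on the forget loss (weights w).
    th = theta.copy()
    for _ in range(10):
        th -= 0.2 * (loss_grad(th, X, y, 1 - w) - loss_grad(th, X, y, w))
    # Upper level: points whose post-unlearning loss stays LOW are
    # hard to forget; raise their selection scores (heuristic update).
    p = sigmoid(X @ th)
    per_loss = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    s += 0.5 * (np.median(per_loss) - per_loss)

# Worst-case forget set: the k points with the highest scores.
forget_idx = np.argsort(-s)[:k]
```

In the paper the lower-level problem is a full unlearning method and the selection variables are optimized with proper bi-level gradients; here both are collapsed into a few explicit gradient steps so the alternating upper/lower structure is visible.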