Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition

13 Jun 2024 | Eleni Triantafillou, Peter Kairouz, Fabian Pedregosa, Jamie Hayes, Meghdad Kurmanji, Kairan Zhao, Vincent Dumoulin, Julio Jacques Junior, Ioannis Mitliagkas, Jun Wan, Lisheng Sun Hosoya, Sergio Escalera, Gintare Karolina Dziugaite, Peter Triantafillou, Isabelle Guyon
The paper presents the findings from the first NeurIPS competition on unlearning, which aimed to stimulate the development of novel algorithms and initiate discussions on formal and robust evaluation methodologies. The competition attracted nearly 1,200 teams from around the world, contributing a wide range of innovative solutions. The evaluation methodology developed for the competition measures forgetting quality according to a formal notion of unlearning, while incorporating model utility for a holistic evaluation. The paper analyzes the effectiveness of different instantiations of this evaluation framework in terms of computational cost and discusses the trade-offs between forgetting quality and model utility. The findings indicate progress in unlearning, with top-performing competition entries surpassing existing algorithms under the proposed evaluation framework. The analysis also explores the generalizability of different algorithms to new datasets, highlighting the importance of algorithmic principles that ensure strong performance on various metrics and subproblems. Overall, the paper contributes to advancing both benchmarking and algorithm development in the field of unlearning.
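To make the idea of a holistic evaluation concrete, here is a minimal illustrative sketch, not the competition's actual metric: it assumes forgetting quality is approximated by how well a simple attacker can distinguish per-example outputs (e.g., losses on the forget set) of the unlearned model from those of a model retrained without that data, and that this score is then scaled by utility measured relative to the retrained reference. The function names (`forgetting_quality`, `holistic_score`), the choice of per-example losses as the compared outputs, and the weighting are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forgetting_quality(unlearned_scores, retrained_scores, n_splits=5):
    """Proxy for forgetting quality (illustrative, not the competition metric):
    train a simple attacker to tell apart per-example outputs of the unlearned
    model and of a model retrained without the forget set. Balanced accuracy
    near 0.5 means the two are hard to distinguish, i.e. forgetting looks good;
    map that to a score in [0, 1]."""
    X = np.concatenate([unlearned_scores, retrained_scores]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(unlearned_scores)),
                        np.zeros(len(retrained_scores))])
    attacker = LogisticRegression()
    acc = cross_val_score(attacker, X, y, cv=n_splits,
                          scoring="balanced_accuracy").mean()
    # acc in [0.5, 1.0] -> forgetting score in [1.0, 0.0]
    return float(np.clip(2.0 * (1.0 - acc), 0.0, 1.0))

def holistic_score(forget_score, retain_acc, test_acc,
                   retrained_retain_acc, retrained_test_acc):
    """Combine forgetting quality with model utility, each measured relative
    to the retrained reference model (hypothetical weighting)."""
    utility = (retain_acc / retrained_retain_acc) * (test_acc / retrained_test_acc)
    return forget_score * min(utility, 1.0)

# Toy usage with synthetic per-example losses on the forget set.
rng = np.random.default_rng(0)
unlearned_losses = rng.normal(0.9, 0.3, size=500)  # stand-in for unlearned-model losses
retrained_losses = rng.normal(1.0, 0.3, size=500)  # stand-in for retrained-model losses
fq = forgetting_quality(unlearned_losses, retrained_losses)
print(holistic_score(fq, retain_acc=0.97, test_acc=0.88,
                     retrained_retain_acc=0.99, retrained_test_acc=0.90))
```

The design intent this sketch tries to capture is that a method cannot score well by degrading the model until the forget set is unrecognizable: the forgetting term rewards indistinguishability from retraining, while the utility term penalizes accuracy lost on retained and test data relative to the retrained reference.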